But as you can see, the maintainer didn’t stop using them and will also now not disclose which commits have them. Humans are emotional creatures and part of being rational is acknowledging that. Folks can be critical of AI usage while phrasing the issue more tactfully, and they would likely see more success when doing so.
It may be that he could be sued for infringing the GPL 3 license, both for feeding GPL code into the model and for using improperly licensed output.
Idk how all the lawsuits will fall, but imo, not disclosing AI use jeopardizes the license requirements for everyone who ever contributed to the project. Best case, the project is essentially public domain for any components edited after this change.
Yeah, I’m interested to see how it turns out. Realistically I don’t think we’ll see models training on GPL code making the model or its output “GPL’ed”, because (I think, but I could be mistaken) there’s already been a court case about training models on copyrighted content and the court ruled that it was okay. The GPL, while extremely restrictive, is still more permissive than the default “all rights reserved” approach of copyright. That is to say, if courts ruled that copyrighted content in models is fine, they’d also rule that copyleft content in models is fine. (Which sucks, and not really something I’m sure I agree with, but I’m also not a lawyer or a judge.)
My understanding is that, regardless of whether it was AI or not, machine output cannot be copyrighted. I’m not sure where the line is, or how much tweaking you’d need to do for it to suddenly become something protected under copyright. With things like code, as opposed to images, I think we’ll likely see that devs get copyright over it, because I think most of the time they’re tweaking it some. Generally with image generation I don’t think folks are tweaking the output, unless they themselves are an artist, and for the most part most artists I’ve seen are more opposed to AI than devs. But who knows? It’ll probably take someone copying code that was created by AI, and the creator/prompter having to back up that what they did was enough to grant them protection under copyright law. But by that point I’m really talking out of my depth; this is just a guess.
My most realistic outlook on it is fairly pessimistic. I think model creators will still be able to use copyrighted and copyleft works however they see fit and I think for all practical purposes most folks using generators will likely be tweaking or prompting creatively enough in some way to successfully argue that the result is something they made using the AI as a tool rather than something the computer just generated on its own.
Imo, I’d prefer a “contamination” approach where the strictest license in the training data applies to all outputs. I doubt such a rule would get through big business filters, but it would maximize the public good, and any country that does manage something like that would probably gain the most benefits from these companies.
The strictest license in the training set is definitely just the normal copyright protections, which is more strict than copyleft.
Edit: to explain, this is because everything is inherently copyrighted. You have rights to protections, so you’re forgoing some of those protective rights by licensing it out.
I don’t really think there’s a problem with saying this sort of thing about devs who use AI if you believe all AI code usage is bad. I’m only saying that if you actually want them to stop using AI instead of just expressing your disdain then there are better approaches. Opening an issue to insult a volunteer developer on their personal project will not get the change you want to see.
I am a strong believer in the power of shame. Republican racists must be taught that their options are either to bend to my will or shut up permanently. I don’t really care if they agree with me or not.
Now, that’s pretty provocative. I am not presently mounting rifles at LLM users. But, I do think it shows that I have more determination than you do.
I guess that’s sort of the disconnect for me. I’m imagining a world where the maintainer, instead of using AI and signing off on the commits where they did, was putting a Nazi slogan in every commit message. My opinion would be different. I wouldn’t have this middle of the road sort of “maybe you should try to actually get them to change what they’re doing instead of shaming them.” Hell, if that were the case I’d probably join in too, or at least throw a thumbs down on their defense of themselves. And I’m not trying to compare AI usage to genocide or say that folks view them as equivalent, I’m just saying that there are topics where I do think going fully on the offensive is warranted.
Maybe I really should self reflect on that, because I am a firm believer that protests aren’t meant to be comfortable. Maybe me saying “they shouldn’t insult a volunteer on an issue tracker” is the same as people complaining about “politics in football” by saying Kaepernick shouldn’t have been taking a knee during the national anthem, in some ways.
I had a donation to Lutris, and was already skeptical of the dev’s ability to maintain their huge (and very buggy) python/gtk3 codebase. Now I know that giving money to the dev would likely make things bigger and buggier. This is useful information, and it’s better to talk about it somewhere where the dev will respond and relatively few bystanders will hear the discussion.
I’m not saying you shouldn’t ever raise this sort of thing as an issue (in general I think issues should only be for bugs, but the annoying reality is there’s rarely a better place for discussions that get visibility), I’m saying the specific content of the message is the problem. There are ways to critique the usage of AI and discuss alternatives that wouldn’t be an issue.
For example,
I see a lot of AI code is used in this repository. AI code is bad because (reasons the user believes it is bad here). Could you please share why/what AI is being used for specifically so we can try to remove the necessity?
Aside
I’m not saying AI code isn’t bad, I’m just saying different people think it’s bad for different reasons. The specific problem the reporter has with AI code may warrant a specific response.
Perhaps more maintainers are needed, maybe someone more familiar with third party libs being used could mentor, etc. From there it really depends on what the response from the maintainer is.
What’s not helpful, and never going to get anyone to change their opinion, is just saying things like “when will @mention see the error of their ways”. As humans we respond to this by digging our heels in, which, as seen in the issue, the maintainer did by becoming less transparent about where AI is and is not used. Had the reporter taken a more diplomatic approach they would have been more likely to get the changes they wanted.
It’s also such self-entitlement; they were being open about it before but had to deal with childish people like this throwing a tantrum.
If it’s such an issue then thank them for being honest, don’t use it and move on; no one’s entitled to free software, though some act like it.
Not all LLM use in code gen is bad, as long as it’s properly reviewed and disclosed. That’s not the same as vibe coding and having no idea about the output.
Yeah, that’s sort of my gripe with it. If you genuinely believe all AI code is bad (which is fine, not saying that’s a “wrong” opinion) maybe try to help the volunteers instead of just insulting them on an issue tracker.
Regardless of your opinion on AI, it is not productive or helpful to open this as an issue.
Disagree. It drew attention to the fact that the maintainers of lutris are of questionable character and helped people like me understand that lutris should be avoided completely.
shame is a powerful weapon
i for one intend to keep making people feel bad for using slop generators
This specific developer is not the only audience to this behavior.
What do you think is more likely from devs who use AI when they see this?
They will obviously stop saying they use AI, much like republicans pretend they’re not racist. So?
I call both of them cowards who refuse to stand up for what they supposedly believe in.
Naw dog
Well, it used to be at least
As the maintainer said, the commits with AI code were already specified. See one here. It was never a secret.
He now removed the code authorship from Claude lmao
Hence the past tense. I think it was pretty petty to do this.
It was my impression that the AI stuff only started with a relatively recent update
Maybe, I don’t know much about this tool or their practices. I only meant that it was factual that they were mentioning which commits had AI generated code in them.
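For reference, per-commit disclosure of AI assistance is usually done with a Co-authored-by trailer in the commit message, which makes it searchable from the command line. A minimal sketch, assuming the trailer text that Claude-based tooling commonly adds (the exact wording is an assumption and varies by project):

```shell
# List commits whose messages carry an AI co-authorship trailer.
# The trailer text below is an assumption; check the project's actual commits.
git log --all -i --grep='co-authored-by: claude' --format='%h %s'
```

Removing the trailer from new commits hides them from searches like this, which is why dropping the attribution makes auditing which commits contain AI-generated code much harder.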