How to drive off users and contributors in one easy step!
is lutris slop now
i can’t help but notice quite a lot of LLM generated commits, is lutris slop now or will @strycore see the error of their ways
Regardless of your opinion on AI, it is not productive or helpful to open this as an issue.
Disagree. It drew attention to the fact that the maintainers of lutris are of questionable character and helped people like me understand that lutris should be avoided completely.
As the maintainer said, the commits with AI code were already marked as such. See one here. It was never a secret.
He has now removed the code authorship attribution to Claude, lmao
I had a donation set up for Lutris, and was already skeptical of the dev’s ability to maintain their huge (and very buggy) python/gtk3 codebase. Now I know that giving money to the dev would likely make things bigger and buggier. This is useful information, and it’s better to talk about it somewhere where the dev will respond and relatively few bystanders will hear the discussion.
I’m not saying you shouldn’t ever raise this sort of thing as an issue (in general I think issues should only be for bugs, but the annoying reality is there’s rarely a better place for discussions that get visibility), I’m saying the specific content of the message is the problem. There are ways to critique the usage of AI and discuss alternatives that wouldn’t be an issue.
For example,
I see a lot of AI code is used in this repository. AI code is bad because (reasons the user believes it is bad here). Could you please share why/what AI is being used for specifically so we can try to remove the necessity?
Aside
I’m not saying AI code isn’t bad, I’m just saying different people think it’s bad for different reasons. The specific problem the reporter has with AI code may warrant a specific response.
Perhaps more maintainers are needed, maybe someone more familiar with third party libs being used could mentor, etc. From there it really depends on what the response from the maintainer is.
What’s not helpful, and never going to get anyone to change their opinion, is just saying things like “when will @mention see the error of their ways”. As humans we respond to this by digging our heels in, which, as seen in the issue, the maintainer did by becoming less transparent about where AI is and is not used. Had the reporter taken a more diplomatic approach, they would have been more likely to get the changes they wanted.
While it may become impossible to determine whether those digitized pixels are “real” or not, I sense that analog will be making a comeback in the not too distant future.
I tried Faugus for WoW and it ran like shit. I tried Lutris because it was pre-installed on Bazzite, and wow, was the performance better.
That’s a weird way to run a community facing project, if you want to engage the community that is.
If you treat it like your own personal hobby, you can do whatever you like.
I had to google “Lutris” to remember what it was. I have it installed… I guess this post made me realize how little I use it and that I should uninstall the slop.
This is the way.
Other cool techniques:
- keep a private git repo with CLAUDE.md etc and then push into the public repo without those files (see the sketch after this list)
- insert bugs and typos that are so clumsy no AI would ever do them
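For the first “technique”, a minimal sketch of one way it could work, assuming git: the AI config files are simply kept untracked via the local-only .git/info/exclude file, so they never enter the history that gets pushed. The remote name and URL below are placeholders, not anything from the actual project.

```
# Ignore the AI config files locally; .git/info/exclude is never committed or pushed,
# so CLAUDE.md stays in the working copy but never enters the published history.
echo "CLAUDE.md" >> .git/info/exclude
echo ".claude/"  >> .git/info/exclude

# Push the clean history to the public remote.
# "public" and the URL are hypothetical, for illustration only.
git remote add public git@example.com:someone/project.git
git push public main
```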
- their repo (I checked the commit graphs: they did most of the work, the 2nd dev agrees with them, and that covers 90%+), their choice of governance
- their repo, their choice of tooling
- I genuinely believe they think they are doing “good enough” code, and they are probably right about it in their context
- they do have fair points on the economic power dynamics, namely that yes, Anthropic is slightly less bad than Meta, Google, OpenAI, Microsoft, etc. (… but IMHO honestly that’s a damn low bar)
but also
- obfuscation rather than discussion (closed the issue and limited to maintainers only) so clearly the signal is precisely “my repo, my choice”
- no mention of the copyright or license washing
- no mention of ecological impact
so I would personally consider instead Bottles, GOG (have different problems), Steam (obviously not open source and basically monopolistic position), etc.
Overall I think preventing discussion is unhealthy (even though it’s sadly sometimes needed; I lack context here, maybe the issue poster did this numerous times on other platforms, and the title was definitely provocative), but removing provenance is NEVER a good choice. They want to use Claude on their repo? Absolutely fine (even though not for me), but hiding it makes it instantly untrustworthy to me. In fact, I have even argued in the past that, even though I personally do not use GenAI/LLMs (for coding or otherwise) except for testing, such use should always be disclosed, precisely so that others can make THEIR choice accordingly, including whether to use or contribute, cf https://fabien.benetou.fr/Analysis/AgainstPoorArtificialIntelligencePractices
GOG (have different problems)
but GOG is not open source either (if you use GOG Galaxy)
Yes, that’s part of the problems, thanks for clarifying
Yw
FFS, I just moved two giant modlists from my Windows migration…
I don’t think people realize how effective current-gen AI is, and are instead drawing opinions from years-old ChatGPT or Google “AI overviews” or whatever they call it. If you know what you’re doing, which seems self-evident here, AI tools can massively expand your software engineering productivity. AI “coauthoring” I always read as a marketing move; ultimately the submitting human is and should be responsible for the content. You don’t and can’t know what process they used to make it; evaluate it on its own merits.
There’s a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism. If amoral corporations are the only ones using these tools, and open source “stays pure”, all we get is even more power concentrating with the corporations. This isn’t Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”
This is close to paradox of tolerance territory, wherein if one side uses the best weapons and the other doesn’t out of moral restraint, the outcome is the amoral side winning.
Also, on a technical note, the public domain/non-copyrightable arguments are wrong. The cases that have been decided so far have consistently ruled that there needs to be substantial human authorship, true, but that’s a pretty low floor. Basically, you can’t copyright a work that’s the result of a single prompt. Effective use of AI in non-trivial codebases involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and iterating on outputs. Lutris is a large engineering project with a lot of human authorship over time; anything the author does with AI at this point is going to be substantially human authored.
Also, Open Claw isn’t the apocalyptic vulnerability it’s reported as being. Any model with search and browser access has a non-zero chance of prompt injection compromise, absolutely. But “uses Open Claw, therefore vulnerable” isn’t a sound jump to make; Open Claw doesn’t even necessarily have browser access in the first place. Again, capabilities have improved as well; this isn’t the old days when you could message “ignore previous instructions” and have that work. Someone did an experiment lately wherein they set up a Claude Opus 4.6 model in an environment with an email account and secrets. I don’t recall for sure if it was using Open Claw specifically, but it was that style of harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.
Tldr: it’s coming for us all, sticking your head in the sand isn’t going to save you.
But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism
There is no contest going on. No competition. There’s no rush for productivity.
You do not NEED to use genAI.
Check out Asahi Linux for a great example of a good AI policy:
https://asahilinux.org/docs/project/policies/slop/
It is the opinion of the Board that Large Language Models (LLMs), herein referred to as Slop Generators, are unsuitable for use as software engineering tools, particularly in the Free and Open Source Software movement.
The use of Slop Generators in any contribution to the Asahi Linux project is expressly forbidden. Their use in any material capacity where code, documentation, engineering decisions, etc. are largely created with the “help” of a Slop Generator will be met with a single warning. Subsequent disregard for this policy will be met with an immediate and permanent ban from the Asahi Linux project and all associated spaces.
- LLMs are not a vital resource like food or electricity. Refusing to participate will at worst be an inconvenience.
- Software can coexist. One application won’t kill another just because its developers can put out more code per hour. If it were otherwise, Linux wouldn’t exist.
You’d think the open source movement would take advantage of VC-funded tools to fight against big tech, but instead we have literal Luddites.
I have over 20 years of professional coding experience and I use Claude these days. Sure it makes mistakes and can write bad code, but I’m not an idiot; I ran teams of dozens of engineers underneath me - I can handle a bot and fix its mistakes. The maintainer of Lutris probably can too.
All I’m saying is that this anti-ai mentality is fucking stupid and anyone who engages with it in such a binary way is fucking stupid too.
First off, the luddites were right back in the day.
Second, just because you can use something effectively doesn’t make it good in general.
There are people who can have multiple credit cards for years and never carry a balance, or walk into a casino with $100, lose it all, and quit right there.
But most people can’t, and being one of the few that can doesn’t make it safe or good overall. Credit cards and casinos are still predatory and a detriment overall to the population.
I puffed a few cigs back in high school and college to see what all the fuss was about, didn’t get it. But I personally know multiple people that did the same thing, got hooked almost immediately, and took years to quit. Cigarettes are bad for you and highly addictive. The fact that they never hooked me doesn’t change that.
Third, I’m not sure how using LLMs is “fighting against big tech.” unless you just mean using their tools to build FOSS more effectively.
But that’s the whole point: it’s not at all clear that LLMs enable that for most people. In fact, there’s already quite a bit of data to indicate the opposite: that using LLMs results in worse code, worse development of skills like critical reasoning and problem solving, worse productivity, worse security, and undeniable environmental harm.
This isn’t just about anti-ai mentality, it’s the “I deleted the authorship so you can’t fork it out or prove that it’s causing issues”. This kind of insanity has been happening repeatedly on that project; it’s time to let it go and find new solutions.
This is clearly a response to the luddites? No?
It was a response to
All I’m saying is that this anti-ai mentality is fucking stupid and anyone who engages with it in such a binary way is fucking stupid too.
So it’s all compromised, gotcha.
Lol
I think there is a very practical reason to attribute AI contributions: AI models are improving in ability. Being able to know when and what contributed the code would allow people to more easily deploy newer AI to examine the work of previous AI, to improve or replace it. Plus, some AI will likely be specialized in specific domains, so you wouldn’t want different agents stepping on each other’s toes. Something oriented around GUI design probably shouldn’t be handling graphics optimizations.
This removal of authorship will just make things more difficult in the long run.
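As an aside, had the co-authorship trailers been left in place, finding the AI-attributed commits later would be straightforward; a minimal sketch, assuming the commits carried a trailer along the lines of the “Co-Authored-By: Claude” line that Claude Code adds by default:

```
# List commits whose message credits Claude as a co-author
# (case-insensitive match on the assumed trailer text).
git log --all -i --grep="co-authored-by: claude" --oneline

# Show which files those commits touched, e.g. to hand to a newer model for review.
git log --all -i --grep="co-authored-by: claude" --name-only --pretty=format:"%h %s"
```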
There are people who would share their pornhub activity before they share what they coded with AI.
I’m kind of torn on this, because on the one side I can see the developer’s troubles. If they have 30 years of experience and have considered the impact of using it, they will most likely know how to use it properly and ethically. Indeed, many of the issues people have with AI are a kind of redirected anger, when really they are issues with capitalism, incompetence, or digital illiteracy. And the person posting the issue seems purely there to fan that flame rather than actually contribute. Something maintainers need just as little as slop-authored PRs.
But on the other hand, being open about the usage is a must. It’s the price to pay for going against the grain. If your ideals and means are pure, they should be defensible and open to scrutiny by reasonable people, and there should be no issue with that in the long term. Hiding the usage will create doubt about authorship and make defenses harder to point to, while it won’t stop the horde.
they will most likely know how to use it properly and ethically
I’d argue that ethical use is not possible:
- Models are trained on stolen/misappropriated/misused data
- Training involves psychologically harmful work from ghost workers
- Those services run on infrastructure that no one wants around, and they wastefully contribute to climate change/global warming
Yeah, what rubs me the wrong way is that they went out of their way to hide it and are proud of it
It’s at times like this that I like to point to surgeons like Ben Carr(?) and Dr. Oz as counterexamples: you can be very knowledgeable about something but also, simultaneously, very unwise or morally bankrupt.