There is https://www.infomaniak.com/en/euria (Switzerland)
And https://mammouth.ai/ (France), though they’re more a “middleman” for various providers (including providers serving open-weights models)
And of course you can still run models locally with LLM hosts like https://github.com/ggml-org/llama.cpp (there are hundreds of derivatives, but llama.cpp is the OG/underlying library for most of them). A decent gaming PC can now run local LLMs on par with SOTA proprietary models from 6-12 months ago (qwen3.6 is a beast). https://old.reddit.com/r/LocalLLaMA/ is a decent subreddit for news and discussions about this; I haven't found a real equivalent on Lemmy.
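If you'd rather script it than use the CLI, here's a minimal sketch using the llama-cpp-python bindings (a wrapper around llama.cpp, not the repo itself); the model filename below is just a placeholder for whatever quantized GGUF you download:

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder; swap in any GGUF file you've downloaded.
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the GPU if the bindings were built with GPU support,
# otherwise it falls back to CPU inference.
llm = Llama(
    model_path="./models/some-model-q4_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```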