ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

  • In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
  • The LLM will be fully open, an openness designed to support broad adoption and foster innovation across science, society, and industry.
  • A defining feature of the model is its multilingual fluency in over 1,000 languages.
  • danzania@infosec.pub · 2 days ago

    I’m sure the community will find something to hate about this as well, since this isn’t an article about an LLM failing at something.

    • Paid in cheese@lemmings.world · 1 day ago

      According to the article, they’ve even addressed my environmental concerns. Since it’s created by universities, I don’t think we’ll even have this shoved down our throats all the time.

      I doubt whether it will be more useful than any other general LLM so far but hate it? Nah.

    • thedruid@lemmy.world · 1 day ago

      LLMs are useful for inspiration, light research, etc.

      They should never be used as part of a finished product or as the main scaffolding.

      • Plebcouncilman@sh.itjust.works · 1 day ago

        Honestly they are pretty good for research too. You can’t imagine the amount of obscure shit my ChatGPT has surfaced when I bounce ideas off it. But yeah, it’s terrible in finished products; I think everyone knows that, and in a year or two, if they don’t improve, I expect we’ll be back to shoving it behind the scenes as was done before ChatGPT. It’s for the best.

        • thedruid@lemmy.world · 1 day ago

          That’s not research. That’s simply surfacing tidbits it found on the net that happen to be true.

          I’ve asked many questions of many LLMs in my chosen areas of interest and modest expertise, seeking more than basic knowledge (which it often surprisingly lacks), and it always has at least one error. Often so subtle it goes unnoticed until it’s too late.

          • Plebcouncilman@sh.itjust.works · 20 hours ago

            So what you’re saying is that it’s good for research, because you can’t research what you don’t know about.

            It’s good for giving starting points which is exactly what I meant.

            Next time I’ll write a dissertation with hyper specifics because it seems it’s necessary every time LLMs are involved as there’s always someone looking to nitpick the statements.

    • cabbage@piefed.social · edited · 2 days ago

      Gigantic hater of all things LLM or “AI” here.

      The only genuine contribution I can think of that LLMs have made to society is their translation capabilities. So even I can see how a fully open source model with “multilingual fluency in over 1,000 languages” could be potentially useful.

      And even if it is all a scam, if this prevents people from sending money to China or the US as they are falling for the scam, I guess that’s also a good thing.

      Could I find something to hate about it? Oh yeah, most certainly! :)

      • ☂️-@lemmy.ml · 1 day ago

        i hear there are cool advances in medicine, engineering and such. i imagine techbros have an exponentially bigger budget, though.

        • cabbage@piefed.social · 1 day ago

          Usually when I see this, it’s using machine learning approaches other than LLMs, and the researchers behind it are usually very careful not to use the term AI, as they are fully aware that this is not what they are doing.

          There’s huge potential in machine learning, but LLMs are very little more than bullshit generators, and generative AI is theft producing soulless garbage. LLMs are widely employed because they look impressive, but for anything that requires substance machine learning methods that have been around for years tend to perform better.

          If you can identify cancer in x-rays using machine learning that’s awesome, but that’s very separate from the AI hype machine that is currently running wild.

          • ☂️-@lemmy.ml · edited · 22 hours ago

            to be fair, the LLMs they use for chatbots and stolen pics generator are not AI either.

            • cabbage@piefed.social · 15 hours ago

              Yeah, I just find it to be a great rule of thumb. Those who understand what they are doing will be aware that they are not dealing with AI, those who jump to label it as such are usually bullshit artists.