If your IP (and possibly your browser) looks “suspicious” or has been used by other users before, you must provide additional information to register on gitlab.com, including your mobile phone number and possibly credit card details. Since it is not possible to contribute to, or even report issues on, open source projects without doing so, I do not think any open source project should use this service until that changes.

Screenshot: https://i.ibb.co/XsfcfHf/gitlab.png

  • TimeSquirrel@kbin.social · 9 months ago

    the beginners who try it are constantly asking questions about why their generated code doesn’t work

    Because it ain’t here to generate all their code for them. It’s a glorified autocomplete and suggestion engine. When are people gonna get this? (not you, just in general)

    I use CoPilot myself, but if you have absolutely no idea what you’re doing, you and CoPilot will quickly hit a dead end together. Books and research using your meatbrain are still very much needed.

    • devfuuu@lemmy.world · 9 months ago

      It’s not in the techbros’ interest to sell the new-age AI shit as something limited that can only do such small things. They need to hype the shit out of it to pull in money from all the crazy investors who understand nothing about it but see AI buzzwords everywhere and need to go for it now because of FOMO.

      It’s only gonna get much worse before it is toned down to appropriate usage.

    • DrQuint@lemmy.world · edited · 9 months ago

      Don’t even need to make it about code. I once asked what a term meant on a certain well-known FOSS application’s benchmarks page. It gave me a lot of unrelated garbage because it made an assumption about the term, exactly the assumption I was trying to avoid. I tried to steer it away from that, but it failed to say anything coherent, looped back, and gave that initial attempt as the answer again. I was stuck, unable to stop it from hallucinating.

      How? Why?

      Basically, it was information you could only find by looking at the GitHub code, and it was pretty straightforward - but the LLM sees “benchmark” and therefore must make a bajillion assumptions.

      Even if asked not to.

      This leads me to a conclusion. It does the same thing with code, and it’s directly related. I once asked about a library, and it found a post where someone was ASKING whether XYZ was what a piece of code was for - and it gave that out as if it were the answer. It wasn’t. And this is the root of the problem:

      AIs never say “I don’t know”.

      It must ALWAYS know. It must ALWAYS assume something, anything, because not knowing is a crime and it won’t commit it.

      And that makes them shit.