• KSP Atlas@sopuli.xyz · 17 days ago

    After getting my head around the basics of how LLMs work, I thought “people rely on this for information?” The models seem OK for tasks like summarisation, though.

    • dan@upvote.au · edited · 17 days ago

      It’s good for coding if you train it on your own code base. It’s not great for writing very complex code, since the models tend to hallucinate, but it’s great for common patterns and for straightforward questions specific to your code base that can be answered from existing code (e.g. “how do I load a user’s most recent order given their email address?”).
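
      For illustration, the kind of answer you’d hope to get back for that question might look like the sketch below; the users/orders schema here is made up, not from any real code base.

      # Hypothetical sketch: assumes `users` and `orders` tables with
      # `id`, `email`, `user_id`, and `created_at` columns.
      import sqlite3

      def most_recent_order(conn: sqlite3.Connection, email: str):
          """Return the newest order row for the user with this email address."""
          return conn.execute(
              """
              SELECT o.*
              FROM orders AS o
              JOIN users AS u ON u.id = o.user_id
              WHERE u.email = ?
              ORDER BY o.created_at DESC
              LIMIT 1
              """,
              (email,),
          ).fetchone()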

      • brbposting@sh.itjust.works · 17 days ago

        It’s wild when you only know how to use SELECT in SQL, but after a dollar’s worth of prompting and 10 minutes of your time, you can have a significantly complex query that you end up using multiple times a week.
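
        For a sense of what that can look like (the schema and column names below are invented purely for illustration), a prompt like “weekly revenue per product with a running total” can come back as roughly:

        # Hypothetical sketch: the `orders`, `order_items`, and `products` tables
        # are invented; needs SQLite 3.25+ for window function support.
        import sqlite3

        WEEKLY_REVENUE_SQL = """
        SELECT
            strftime('%Y-%W', o.created_at)  AS week,
            p.name                           AS product,
            SUM(oi.quantity * oi.unit_price) AS weekly_revenue,
            SUM(SUM(oi.quantity * oi.unit_price)) OVER (
                PARTITION BY p.name
                ORDER BY strftime('%Y-%W', o.created_at)
            ) AS running_total
        FROM orders AS o
        JOIN order_items AS oi ON oi.order_id = o.id
        JOIN products AS p ON p.id = oi.product_id
        GROUP BY week, product
        ORDER BY week, product;
        """

        def weekly_revenue(conn: sqlite3.Connection):
            """Run the generated report query and return all rows."""
            return conn.execute(WEEKLY_REVENUE_SQL).fetchall()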

    • brbposting@sh.itjust.works · 17 days ago

      I don’t love it for summarization. If all I read is the summary, my takeaway may be inaccurate.

      Brainstorming is incredible, though. So are revision suggestions, drafting tedious responses, reformatting, and parsing.

      In all cases, nothing gets attributed to me unless I’ve read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. It sure can be an amazing starting point, especially compared to a blank page.

    • brucethemoose@lemmy.world · 17 days ago

      > the model seems ok for tasks like summarisation though

      That and retrieval are the business use cases so far, and even then only where it’s acceptable for the results to be wrong somewhat frequently.