• 2 Posts
  • 56 Comments
Joined 1 year ago
Cake day: June 24th, 2023

  • Left side: Black Mirror S01E02, “Fifteen Million Merits”. A guy tries to “break the system”, but it backfires: the critique that was supposed to change people’s minds is absorbed by the system and turned into an entertainment product.

    Right side: “Being Ugly: My Experience”, a YouTube video of a guy recounting his experience of being ugly.

  • egeres@lemmy.world to Technology@lemmy.world · Are We in an AI Bubble? (edited · 5 months ago)

    I’m using LLMs to parse and organize information in my file directory: I turn bank receipts into JSON files, automatically rename downloaded movies into a more legible format I prefer, and summarize clickbaity YouTube videos. I use Copilot in VS Code to code much faster and ChatGPT all the time to discover new libraries and cut through boilerplate. I also have a personal assistant with access to a lot of metrics about my life (meditation streak, when I exercise, the status of my system, etc.) that helps me make decisions.
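    As a concrete example of the receipt-to-JSON part, here is a minimal sketch assuming the official OpenAI Python client; the model name, prompt, and file path are illustrative placeholders, not my actual setup:

```python
# Minimal sketch: turn the text of a bank receipt into a JSON record with an
# LLM. The model name, prompt, and file path are placeholders.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def receipt_to_json(receipt_path: str) -> dict:
    raw_text = Path(receipt_path).read_text()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},  # force valid JSON output
        messages=[
            {
                "role": "system",
                "content": "Extract date, merchant, currency, and total from "
                           "this receipt. Reply with a single JSON object.",
            },
            {"role": "user", "content": raw_text},
        ],
    )
    return json.loads(response.choices[0].message.content)


print(receipt_to_json("receipts/2024-06-groceries.txt"))  # hypothetical file
```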

    I don’t know about you, but I feel like I’m living in an age of wonder.

    I’m not sure what to say about the prompts. I feel like I’m integrating AI into my systems to automate mundane stuff and oversee more information, and I think one should be paid for the work and value produced.

  • People get very confused about this. Pre-training “ChatGPT” (or any transformer model) on “internet shitposting text” doesn’t cause it to reply with garbage comments; bad alignment does. Google seems to have implemented no framework to prevent hallucinations whatsoever, and the RLHF/DPO applied seems to be lacking. But this is not a “problem with training on the entire web”. You could pre-train a model exclusively on a 4chan database and, with the right fine-tuning, end up with a perfectly healthy and harmless model (a sketch of such a fine-tuning step follows this comment). Actually, it’s not bad to have “shitposting” or “toxic” text in the pre-training, because it gives the model the ability to identify and understand it.

    If anything, the “problem with training on the entire web” is that we would be drinking from a poisoned well: AI-generated text has a very different statistical distribution from the text users produce, which would degrade the quality of subsequent models. Evidence that data quality matters can be seen with SlimPajama, a cleaned and deduplicated version of the RedPajama dataset, which improves the scores of trained models simply because it has less duplicated information and is a denser dataset (a toy deduplication sketch also follows below): https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
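    On the fine-tuning point above: a minimal sketch of preference alignment with DPO, assuming Hugging Face’s trl library (API details vary between trl versions; the base model and preference dataset are placeholders taken from trl’s documentation):

```python
# Sketch of DPO fine-tuning with Hugging Face trl; model and dataset names
# are placeholders, and the exact API differs between trl versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder small model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO learns from (prompt, chosen, rejected) preference pairs, steering the
# model toward the "chosen" style regardless of what the pre-training data
# looked like.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(output_dir="dpo-aligned-model", logging_steps=50)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```

    The point is that the preference data, not the raw pre-training corpus, decides how the model answers.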
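    And a toy version of the deduplication idea behind SlimPajama (real pipelines use fuzzy matching such as MinHash; this exact-hash version is only illustrative):

```python
# Toy deduplication: drop documents whose normalized text hashes to
# something already seen. Real corpus pipelines use fuzzy methods (MinHash);
# this exact-match version only illustrates the idea.
import hashlib


def dedup(documents: list[str]) -> list[str]:
    seen: set[str] = set()
    unique: list[str] = []
    for doc in documents:
        # Normalize lightly so trivial whitespace/case changes still match.
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique


corpus = ["Same post.", "same   POST.", "A different post."]
print(dedup(corpus))  # -> ['Same post.', 'A different post.']
```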