Developer fighting 502s from Lemmy's servers.

  • 1 Post
  • 26 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • Is it for self-host ppl too?

    In theory, that's not an issue. I use Supabase, which you can self-host as well.

    You can also self-host the Mistral client, but not Gemini. However, I am planning to move away from Gemini towards a more open solution which would also support self-hosting, or in-browser AI.

    I am looking for OIDC, S3 and PgSQL

    Since I use Supabase, it runs on PgSQL and Supabase Storage, which is just an adapter to AWS S3 - or any S3, really. For auth, I use Supabase Auth, which uses OAuth 2.0 - that's the same as OIDC, right?


  • Thanks. My general strategy for GenAI and reducing hallucinations is to not give it the task of making stuff up, but to have it work only on existing text - that's why I'm not allowing users to create content without source material.

    However, LLMs will be LLMs, and I've already found multiple hallucinations while testing it out. I built in a reporting system, although only submitting reports works right now, not reviewing reported questions.

    That's my short-term plan for getting good content quality, at least. I also want to move away from Vercel AI & Gemini to a LangChain agent system, or maybe a graph, which should increase the output quality.

    Maybe in some parallel universe this really takes off and many people work on high-quality courses together…






  • Thanks, haha. I'd love to develop a native app for it too, but this is a zero-budget project (aside from the domain). The Play Store has a one-time fee, so that's 25€ for Android, plus 8€/month for the iOS App Store, just to have the app on there.

    In theory, I could just offer a downloadable .apk for Android to circumvent the fee, but most people don't want to install a random .apk from the internet. And I'm not developing a native app for like 3 people, excluding myself (I'm an iPhone user).

    Soo, yeah that’ll probably not happen :(.







  • I use Gemini, which supports PDF file uploads, combined with structured outputs, to generate course section, level & question JSON.

    When you upload a PDF, it first gets uploaded to an S3 bucket directly from the browser, which then sends the filename and other data to the server. The server then downloads that document from S3 and sends it to Gemini, which streams JSON back to the browser. After that, the PDF is permanently deleted from S3.
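
    The upload → download → generate → delete lifecycle can be sketched as a toy example, with a Map standing in for the S3 bucket and processWithModel as a hypothetical stand-in for the Gemini call:

```typescript
// Toy sketch of the lifecycle: upload -> download -> generate -> delete.
// A Map stands in for the S3 bucket; processWithModel is a hypothetical
// placeholder for the actual Gemini call.
const bucket = new Map<string, ArrayBuffer>();

// The browser uploads the document directly to storage
function uploadObject(filename: string, data: ArrayBuffer): void {
    bucket.set(filename, data);
}

// The server downloads the document, generates JSON, then deletes the file
async function processDocument(filename: string): Promise<string> {
    const data = bucket.get(filename);
    if (!data) throw new Error(`${filename} not found`);
    const result = await processWithModel(data);
    bucket.delete(filename); // permanently deleted after processing
    return result;
}

async function processWithModel(data: ArrayBuffer): Promise<string> {
    return `generated JSON from ${data.byteLength} bytes`; // placeholder
}
```

    The point of deleting at the end of the same request is that the raw document never sits in storage longer than one generation run.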

    Data-privacy-wise, I wouldn't upload anything sensitive, since I don't know what Google does with PDFs uploaded to Gemini.

    The Prompts are in English, so the output language is English as well. However, I actually only tested it with German Lecture PDFs myself.

    So, yes, it probably works with any language that Gemini supports.

    Here is the source code for the core function behind this feature:

    export async function createLevelFromDocument(
        { docName, apiKey, numLevels, courseSectionTitle, courseSectionDescription }:
        { docName: string, apiKey: string, numLevels: number, courseSectionTitle: string, courseSectionDescription: string }) {

        const hasCourseSection = courseSectionTitle.length > 0 && courseSectionDescription.length > 0;

        // Step 1: Download the document from the S3 bucket and get a buffer from it
        const blob = await downloadObject({ filename: docName, path: "/", bucketName: "documents" });
        const arrayBuffer = await blob.arrayBuffer();

        // Step 2: Build the prompt; PDFs are passed as an attachment,
        // anything else is read as HTML text
        const google = createGoogleGenerativeAI({ apiKey: apiKey });

        const courseSectionsPrompt = createLevelPrompt({ hasCourseSection, title: courseSectionTitle, description: courseSectionDescription });

        const isPDF = docName.endsWith(".pdf");

        const content: UserContent = [];

        if (isPDF) {
            content.push(pdfUserMessage(numLevels, courseSectionsPrompt) as any);
            content.push(pdfAttatchment(arrayBuffer) as any);
        } else {
            const html = await blob.text();
            content.push(htmlUserMessage(numLevels, courseSectionsPrompt, html) as any);
        }

        // Step 3: Call the model and stream structured JSON back to the caller
        const result = await streamObject({
            model: google("gemini-1.5-flash"),
            schema: multipleLevelSchema,
            messages: [
                {
                    role: "user",
                    content: content
                }
            ]
        });

        return result;
    }