- cross-posted to:
- technology@beehaw.org
This is the best summary I could come up with:
Over the last few years, Mozilla also started making startup investments, for example in the Mastodon client Mammoth, and acquired Fakespot, a website and browser extension that helps users identify fake reviews.
Indeed, when Mozilla launched its annual report a few weeks ago, it also used that moment to add a number of new members to its board, the majority of whom focus on AI.
Surman told me that the leadership team had been planning these efforts for almost a year, but as public interest in AI grew, he “pushed it out of the door.” But then Draief pretty much moved it right back into stealth mode to focus on what to do next.
Surman believes that no matter the details of that, though, the overall principles of transparency and freedom to study the code, modify it and redistribute it will remain key.
“The licenses aren’t perfect and we are going to do a bunch of work in the first half of next year with some of the other open source projects around clarifying some of those definitions and giving people some mental models.”
Then, he noted, when the smartphone arrived, there were a few smaller projects that aimed to create alternatives, including Mozilla (and at its core, Android is obviously also open source, even as Google and others have built walled gardens around the actual user experience).
The original article contains 1,252 words, the summary contains 229 words. Saved 82%. I’m a bot and I’m open source!
I’m afraid that if AI ends up being just a fad, Mozilla won’t be able to recover from this bet.
I don’t think AI will be a fad in the same way blockchain/crypto-currency was. I certainly think there’s somewhat of a hype bubble surrounding AI, though - it’s the hot, new buzzword that a lot of companies are mentioning to bring investors on board. “We’re planning to use some kind of AI in some way in the future (but we don’t know how yet). Make cheques out to ________ please”
I do think AI has actual, practical uses, though, unlike blockchain, which always came off as a “solution looking for a problem”. Like, I’m a fairly normal person and I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc. Whereas I’ve never touched anything to do with crypto.
AI feels like a space that will continue to grow for years, and that will be implemented into more and more parts of society. The hype will die down somewhat, but I don’t see AI going away.
I’ve found good uses for AI already in asking it various questions where it gives better answers than search engines, in writing code for me (I can’t write code myself), etc.
I’d caution against using it for these things due to its tendency to make stuff up. I’ve tried using ChatGPT for both, but in my experience, if I can’t find something on Google myself, ChatGPT will claim to know the answer but give me something that just isn’t true. For coding it can do basic things, but if I wanna use a library or do some other more granular task, it’ll do something like make up a function call that doesn’t exist. The worst part is that it looks right, so I used to waste time trying to figure out why it didn’t work for me, when it turns out it doesn’t work for anybody (a quick existence check like the sketch after this comment helps).

For factual information, I had to correct a friend who gave me fake stats on airline reliability to help me make a flight choice. He got them from GPT-4, and while the numbers looked right, they deviated from other sources. In general, you never want to trust any specific numbers from LLMs, because they’re trained to look right rather than to actually be right.
For me LLMs have proven most useful for things like brainstorming or coming up with an image I can use for illustration purposes. Because those things don’t need to be exactly right.
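To make the “function call that doesn’t exist” problem above concrete, here’s a minimal sketch of the kind of sanity check you can run before spending time debugging an LLM suggestion. The `json.to_pretty_string` name is invented purely for illustration; it is not a real function, which is exactly the point.

```python
import importlib

def attribute_exists(module_name: str, attr_name: str) -> bool:
    """Return True only if the named module really exposes the named attribute.

    Handy as a quick sanity check on an LLM-suggested call before
    debugging why it "doesn't work for me".
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr_name)

# json.dumps is real; json.to_pretty_string is the kind of
# plausible-looking name an LLM might make up.
print(attribute_exists("json", "dumps"))             # True
print(attribute_exists("json", "to_pretty_string"))  # False
```

It won’t catch wrong arguments or wrong behavior, but it does catch the “looks right, doesn’t exist” case in a second instead of an hour.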
The thing is, AI has been around for a really long time and has lots of established use-cases. Unfortunately, none of them are to do with generative language/image models. AI is mainly used for classifying data as part of data science. But data science is extremely unsexy to the average person, so for them AI has become synonymous with the ChatGPTs and DALLEs of the world.
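For a sense of what that classic, “unsexy” classification work typically looks like, here’s a minimal sketch, assuming scikit-learn is installed; the dataset and model choice are arbitrary examples, not anything specific to Mozilla or the article.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Classic, pre-LLM machine learning: fit a classifier to labelled data,
# then use it to categorise samples it hasn't seen before.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```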
Don’t worry, once the hype fades, we can start calling LLMs “machine learning” again
the ChatGPTs and DALLEs of the world.
AKA my least favorite part of every web and image search, respectively.
If it was a fad, then why doesn’t cryptocurrency simply die? Because I’ve been waiting on that for some time now and nothing really happens.
Extinction, if we’re lucky enough.
I ask once again: who asked for this? Can anybody point to a large community outcry for Mozilla to invest in this? They only have finite time and money, after all, and it appears they are still maintaining their last shiny object, VR worlds.
I didn’t ask for it, but I’m lowkey happy to have them in this. I imagine, a few years from now, all the start-ups will have run out of money or been acquired, and as per usual, only big tech companies will remain.
Traditional search engines will basically be dead, completely swamped with AI-generated spam. And even non-techies will generally depend on generative AIs for information and communication.
If those are exclusively controlled by big tech, we’ll have tons of censorship (e.g. if you want to export an LLM to China, it has to pretend to not know about the Uyghurs) and just generally no control. I don’t expect Mozilla to save the world here; they’re too small for that. But they’re already providing useful tools and lowering the barrier to entry for independent devs.
That’s a huge “if” for how the future pans out, though. If Mozilla is gambling correctly, then their investment might pay off for the average person.
Unfortunately, 2023 marked their purchase of private user data, and they gave themselves the right to purchase and resell it – location data, browser history, usernames, full profiles.
So if Mozilla wants to be the “privacy AI” company, they skipped the privacy.