They had the AI models of those days.
That’s cool, didn’t know AI models were a thing in those days. Are they comparable (maybe more crude?) to today’s tech? Like, did they use machine learning? As far as I remember there wasn’t much dedicated AI-accelerating hardware back then. Maybe a beefy GPU for neural network purposes? Interesting though
Oh, and to answer this specifically: Nvidia GPUs have been used in ML research forever. It goes back to at least 2008 and stuff like the desktop GTX 280, maybe earlier.
Most “AI accelerators” are basically the same thing these days: overgrown desktop GPUs. They have pixel shaders, ROPs, video encoders and everything, with the one partial exception being the AMD MI300X and beyond (which are missing ROPs).
CPUs were used, too. In fact, Intel made specific server SKUs for giant AI users like Facebook. See: https://www.servethehome.com/facebook-introduces-next-gen-cooper-lake-intel-xeon-platforms/
We didn’t call them AI because they weren’t (and aren’t) intelligent. But marketing companies eventually realized there were trillions of dollars to be made convincing people they were intelligent, so they created models explicitly designed to convince people of exactly that: that they are intelligent, can have genuine conversations like a real human, can create real art like a real human, and totally aren’t just empty-headedly mimicking thousands of years of human conversation and art. Then they immediately used those models to convince people that the models themselves were intelligent (and many other things besides). Given that marketing and advertising literally exist to convince people of things and have become exceedingly good at it, it’s really a brilliant business move and seems to be working great for them.
Models were a thing even some 30 or 40 years ago. Processing power makes most of the difference today: it allows larger models and quicker results.
I didn’t know. Are you somewhat informed about the history of models? I’d love to hear it from you instead of a random crypto bro’s LLM summary. Thanks!
Yann LeCun gave us convolutional neural networks (CNNs) in 1998. These are the models used for pretty much all specialized computer vision tasks even today. TinEye came into existence ten years later, in 2008. I can’t tell you if they used CNNs, but they were certainly available.
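Just to give a feel for what a CNN actually is, here’s a minimal sketch in PyTorch (my choice of library; the layer sizes are arbitrary LeNet-ish numbers picked for illustration, not anything LeCun or TinEye actually shipped):

```python
import torch
import torch.nn as nn

# Toy LeNet-style CNN: conv layers learn local image filters,
# pooling shrinks the feature maps, a linear layer classifies.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 1 grayscale channel -> 6 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A fake batch of 8 grayscale 28x28 images, just to show the shapes.
logits = TinyCNN()(torch.randn(8, 1, 28, 28))
print(logits.shape)  # torch.Size([8, 10])
```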
I don’t remember too much tbh, just that we heard about the theory at university and tried out some of the mathematical methods. They were tiresome ;)
Today I would recommend starting your studies with the Wikipedia pages on Markov models and machine learning.
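For a taste of what those older statistical models look like, here’s a toy first-order Markov chain word predictor; the corpus and everything about it are made up purely for illustration:

```python
import random
from collections import defaultdict

# The "model" is just a table of which word tends to follow which,
# estimated by counting pairs in the training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the fish"
```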
Machine learning has been a field for years, as others said, yeah, but Wikipedia would be a better expansion of the topic. In a nutshell, it’s largely about predicting outputs based on trained input examples.
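A tiny sketch of that predict-from-examples idea using scikit-learn, with completely made-up numbers (hours studied vs. exam score is just an invented example to show the fit/predict pattern):

```python
from sklearn.linear_model import LinearRegression

# Trained input examples: hours studied -> exam score (made-up data).
X = [[1], [2], [3], [4], [5]]
y = [52, 58, 65, 71, 78]

model = LinearRegression()
model.fit(X, y)                # "learn" from the examples
print(model.predict([[6]]))    # predict the output for an unseen input
```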
It doesn’t have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though not entirely machine learning based. IDK what TinEye does in their backend, but there are some more “oldschool” approaches using more traditional programming techniques, generating signatures for images that can be easily compared in a huge database.
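To be clear, I have no idea if this is what TinEye actually does; this is just a sketch of one classic “oldschool” signature approach, average hashing, written with Pillow (the file names are placeholders):

```python
from PIL import Image

def average_hash(path, size=8):
    """Classic 'aHash': shrink, grayscale, threshold against the mean.
    Produces a 64-bit signature that can be indexed in a database."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(a, b):
    # Number of differing bits; small distance = visually similar images.
    return bin(a ^ b).count("1")

# h1 = average_hash("cat.jpg")          # placeholder file names
# h2 = average_hash("cat_resized.jpg")
# print(hamming_distance(h1, h2))       # near 0 for near-duplicates
```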
Separately, image similarity metrics (like LPIPS or SSIM), which boil the difference between two images down to a single number (for SSIM, 1 is a perfect match and values near 0 mean unrelated), are common components in machine learning pipelines. So are text embedding models, which do the same for text.
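For the metric side, scikit-image ships an SSIM implementation; a quick sketch, where the images are just random noise to have something to compare:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Two fake grayscale "images" (random noise) purely to demo the call.
rng = np.random.default_rng(0)
img_a = rng.random((64, 64))
img_b = img_a + rng.normal(scale=0.05, size=(64, 64))  # slightly noisy copy

# data_range tells SSIM the span of pixel values (roughly 0..1 here).
score = ssim(img_a, img_b, data_range=img_b.max() - img_b.min())
print(score)  # close to 1.0 = very similar, near 0 = unrelated
```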
LLMs in particular have an interesting history, going back to (if I remember the name correctly) BERT in Google’s labs. There were also tiny LLMs people ran on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, heh, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before AI Bros poisoned the well).
As a transmillennial student of AI/ML, great write-up.