DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
Even o1 (which AFAIK is roughly on par with R1-671B) wasn’t really helpful for me. I often (actually, all the time) need correct answers to complex problems, and LLMs just aren’t capable of delivering that.
I still need to try out whether it’s possible to train it on my/our codebase, such that it’s at least usable as something like GitHub Copilot (which I also don’t use, because it just isn’t reliable enough and too often generates bugs). Also, I’m a fast typist: by the time the answer is there and I’ve parsed/read/understood the code, I’ve already written a better version.
You’re just trolling, aren’t you? Have you used AI while coding for a longer time, and then tried going without it for a while?
I currently don’t miss it… Keep in mind that you still have to check whether all the code is correct, etc. Writing code isn’t what usually takes much of my time; it’s debugging, and finding architecturally sound, good solutions to the problem. And AI is definitely not good at that (even if you’re not that experienced).
As you’re being unkind all the time, let me be unkind as well :)
A calculator also isn’t much help if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.
If you can effectively use AI for your problems, maybe they’re too repetitive, and actually just dumb boilerplate.
I’d rather solve problems that require actual intelligence (e.g. doing research, solving math problems, thinking about software architecture, solving problems efficiently), and I don’t even want to deal with problems that require me to write a lot of repetitive code, which AI may be (and often is not) of help with.
I have yet to see efficient generated Rust code that autovectorizes well without a lot of allocations, etc. I always get triggered by the insanely bad code quality of an AI that doesn’t even really understand what allocations are… Argh, I could go on…
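To make the allocation point concrete, here’s a small sketch (function names and numbers are my own illustration, not from any actual model output): the first version allocates a fresh `Vec` on every call, while the second writes into a caller-provided slice, which is the kind of plain loop LLVM’s autovectorizer handles well.

```rust
// Allocation-heavy style, typical of generated code: a new heap Vec per call.
fn scale_alloc(input: &[f32], factor: f32) -> Vec<f32> {
    input.iter().map(|x| x * factor).collect() // one allocation every call
}

// Allocation-free style: mutate a buffer the caller already owns.
// A simple slice loop like this autovectorizes readily.
fn scale_in_place(buf: &mut [f32], factor: f32) {
    for x in buf.iter_mut() {
        *x *= factor;
    }
}

fn main() {
    let v = scale_alloc(&[1.0, 2.0, 3.0], 2.0);
    assert_eq!(v, vec![2.0, 4.0, 6.0]);

    let mut buf = [1.0_f32, 2.0, 3.0];
    scale_in_place(&mut buf, 2.0);
    assert_eq!(buf, [2.0, 4.0, 6.0]);
}
```

In a hot loop, the second shape also avoids allocator pressure entirely, which is what the complaint above is about.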
What’s this “if” nonsense? I loaded up a light model of it and have already put it to work.
Have you actually read my text wall?
> Even o1 (which AFAIK is roughly on par with R1-671B) wasn’t really helpful for me. I just need often (actually all the time) correct answers to complex problems and LLMs aren’t just capable to deliver this. […]
Ahh. It’s overconfident neckbeard stuff then.
> You’re just trolling aren’t you? Have you used AI for a longer time while coding and then tried without for some time? […]
Yes, I have tested that use case multiple times. It performs well enough.