taste of his own medicine
Eat my SaaS
Two days later…
Bonus points if the attackers use ai to script their attacks, too. We can fully automate the SaaS cycle!
Someone really should’ve replied with
My attack was built with Cursor
That is the real dead Internet theory: everything from production to malicious actors to end users is just ai scripts wasting electricity and hardware resources for the benefit of no human.
That would only happen if we give our ai assistants the power to buy things on our behalf and manage our budgets. They will decide among themselves who needs what, and the money will flow into billionaires’ pockets without any human intervention. If it goes far enough, not even rich people would be rich, as trust funds and stock portfolios would operate under ai. If the ai achieves singularity with that level of control, we are all basically in spectator mode.
The Internet will continue to function just fine, just as it has for 50 years. It’s the World Wide Web that is on fire. Pretty much has been since a bunch of people who don’t understand what Web 2.0 means decided they were going to start doing “Web 3.0” stuff.
The Internet will continue to function just fine, just as it has for 50 years.
Sounds of intercontinental data cables being sliced
Seems like a fitting end to the internet, imo. Or the recipe for the Singularity.
This is the opposite of the singularity
It is a singularity, in the sense that it is an infinitely escalating level of suck.
Suckularity?
I never said it was going to be any good!
Not only the internet. Soon everybody will use AI for everything. Lawyers will use AI in court on both sides. AI will fight against AI.
they’ll find a use case any day now for realsies.
It was a time of desolation, chaos, and uncertainty. Brother pitted against brother. Babies having babies.
Then one day, from the right side of the screen, came a man. A man with a plastic rectangle.
lol thanks
I was at a coffee shop the other day and two lawyers were discussing how they were doing stuff with ai that they didn’t know anything about and then just handing it off to their clients.
That shit scared the hell out of me.
And everything will just keep getting worse, with more and more common folk eating the hype and brainwashing, using these highly inaccurate tools at all levels of our society every day to make decisions about things they have no idea about.
I’m aware of an effort to get LLM AI to summarize medical reports for doctors.
Very disturbing.
The people driving it where I work tend to be the people who know the least about how computers work.
I am not a bot trust me.
This is satire / trolling for sure.
LLMs aren’t really at the point where they can spit out an entire program, including handling deployment, environments, etc. without human intervention.
If this person is ‘not technical’ they wouldn’t have been able to successfully deploy and interconnect all of the pieces needed.
The AI may have been able to spit out snippets, and those snippets may be very useful, but where it stands, it’s just not going to be able to write the software, stand up the DB, and deploy all of the services needed with no human supervision or overrides. With human guidance, sure, but without someone holding the AI’s hand it just won’t happen (remember, this person is ‘not technical’).
It’s further along than you think. I spoke to someone today about it, and he told me it produced a basic SaaS app for him. He said that it looked surprisingly okay and the basic functionality actually worked too. He did note that it kept using deprecated code, consistently made a few basic mistakes despite being told how to avoid them, and failed to produce nontrivial functionality.
He did say that it used very common libraries and we hypothesized that it functioned well because a lot of relevant code could be found on GitHub and that it might function significantly worse when encountering less popular frameworks.
Still, it’s quite impressive, although not surprising, considering it was only a matter of time before people started feeding the feedback of an IDE back into it.
Did it provision a scalable infrastructure? Because that’s the aaS part of SaaS.
Might be satire, but I think some “products based on LLMs” (not LLMs alone) would be able to. There are pretty impressive demos out there, but honestly I haven’t tried them myself.
idk, I’ve seen some crazy complicated stuff woven together by people who can’t code. I’ve got a friend who has no job and is trying to make a living off coding while, for 15+ years, being totally unable to learn coding. Some of the things they make are surprisingly complex. Tho also, and the person mentioned here may do similarly, they don’t ONLY use ai. They use GitHub a lot too. They make nearly nothing themselves, but go through GitHub and basically combine large chunks of code others have made with ai-generated code. Somehow they do it well enough to have done things with servers, cryptocurrency, etc… all the while not knowing any coding language.
That reminds me of this comic strip…
Claude Code can make something that works, but it’s kinda over-engineered and really struggles to make an elegant solution that maximises code reuse - it’s the opposite of DRY.
I’m doing a personal project at the moment and used it for a few days. I made good progress, but it got to the point where it was just a spaghetti mess of jumbled code, so I deleted it and went back to implementing each component one at a time and then wiring them together manually.
My current workflow is basically to never let them work on more than one file at a time, and to build the app one component at a time, starting at the ground level and working up, so for example:
Create base classes that things will extend. Then create an example data model class, and iterate on that architecture A LOT until it’s really elegant.
Then I’ve been getting it to write me a generator, not the actual code for the models.
Then (level 3) we start with the UI layer, so now we make a UI kit the app will use and reuse for different components.
Then we make a UI component that will be used in a screen. I’m using Flutter as an example, so it would be a stateless component.
We now write tests for the component.
Now we do a screen, and I import each of the components.
It’s still very manual, but it’s getting better. You are still going to need a human coder, I think forever, but there are two big problems that aren’t being addressed, because people are just putting their heads in the sand and saying “nah, can’t do it”, or, like the clown OP in the post, thinking they can do it.
- Because dogs be clownin, the public perception of programming as a career will be devalued: “I’ll just make it myself!” Or like my rich engineer uncle said to me when I was doing websites professionally: a 13-year-old can just make a website, why would I pay you so much to do it? THAT FUCKING SUCKS. But a similar attitude has existed before: “I’ll just hire Indians.” This is bullshit, but perception is important, and it’s going to require you to justify yourself for a lot more work.
- And this is the flip-side good news. These skills you have developed: it is going to be SO MUCH FUCKING HARDER TO LEARN THEM. When you can just say “hey, generate me an app that manages customers and follow-ups” and something gets spat out, you aren’t going to go through the grind required to work out basic shit. People will simply not get to the same level they are now.
That logic about how to scaffold and architect an app in a sensible way - USING AI TOOLS - is actually the new skillset. You need to know how to build the app, and then how to efficiently and effectively use the new tools to actually construct it. Then you need to be able to do code review for each change.
</rant>
Mmmmmm no, Claude definitely is. You have to know what to ask it, but I generated an entire dead man’s switch daemon written in Go in like an hour with it, just to see if I could.
So you did one simple program.
SaaS involves a suite of tooling and software, not just a program that you build locally.
You need at a minimum, database deployments (with scaling and redundancy) and cloud software deployments (with scaling and redundancy)
SaaS is a full stack product, not a widget you run on your local machine. You would need to deputize the AI to log into your AWS (sorry, it would need to create your AWS account) and fully provision your cloud infrastructure.
Lol they don’t need scaling and redundancy to work. They just need scaling and redundancy to avoid being sued into oblivion when they lose all their customer data.
As a full time AI hater, I fully believe that some code-specialized AI can write and maybe even deploy a full stack program, with basic input forms and CRUD, which is all you need to be a “saas”.
It’s gonna suck, and be unmaintainable, and insecure, and fragile. But I bet it could do it and it’d work for a little while.
That’s not “working saas” tho.
It’s like calling hello world a “production-ready CLI application”.
What makes it “working”, is that the Software part of Software as a Service, is available as a Service.
The service doesn’t have to scale to a million users. It’s still a SaaS if it has one customer with like 4 users.
Is this a pedantic argument? Yes.
Are you starting a pedantic fight about the specific definition of SaaS? Also yes. My CGI script is a SaaS.
deleted by creator
I’m skeptical. You are saying that your team has no hand in the provisioning and you deputized an AI with AWS keys and just let it run wild?
How? We’ve been trying to adopt AI for dev work for years now, and every time the next-gen tool or model gets released it fails spectacularly at basic things. And that’s just the technical stuff; I still have no idea how to tell it to implement our use cases, as it simply does not understand the domain.
It is great at building things others have already built and that it could train on, but we don’t really have a use case for that.
Is the implication that he made a super insecure program and left the token for his AI thing in the code as well? Or is he actually being hacked because others are coping?
Doesn’t really matter. The important bit is he has no idea either. (It’s likely the former and he’s blaming the weirdos trying to get in)
Nobody knows. Literally nobody, including him, because he doesn’t understand the code!
rofl!
That’s fucking hilarious then.
Nah the people doing the pro bono pen testing know. At least for the frontend side and maybe some of the backend.
I’m stealing “pro bono pen testing.”
Can’t steal it if it’s already pro bono :D
But the things doing the testing could be bots instead of human actors, so it may very well be that no human does in fact know.
Thought so too, but nah. Unless that bot is very intelligent and can read and humorously respond to social media posts by setting its fake domain.
Good point! Thanks for pointing that out.
Potentially both, but you don’t really have to ask to be hacked. Just put something into the public internet and automated scanning tools will start checking your service for popular vulnerabilities.
He told them which AI he used to make the entire codebase. I’d bet it’s way easier to RE the “make a full SaaS suite” prompt than it is to RE the code itself once it’s compiled.
Someone probably poked around with the AI until they found a way to abuse his SaaS
AI writes shitty code that’s full of security holes, and Leo here has probably taken zero steps to further secure his code. He broadcasts his AI-written software and it’s open season for hackers.
Not just that, but he literally advertised himself as not being technical. That seems to be just asking for an open season.
Reminds me of the days before ai assistants, when people copy-pasted code from forums and you’d get questions like “I found this code and I know what every line does except this ‘for (int i = 0; i < 10; i++)’ part. Is this someone using an unsupported expression?”
i <= 9, you heathen. Next thing you’ll do is i < INT_MAX + 1 and then the shit’s steaming.

If it was correct it wouldn’t have been copied into the forums lmao

I mean, i < 10 isn’t wrong as such, it’s just good practice to always use <=, because in the INT_MAX case you have to, and everything should be regular because of the principle of least astonishment: that 10 might become a FOO, that FOO might then become INT_MAX, and each of those changes looks valid in isolation, but if there’s only a single i < FOO in your codebase you’ve introduced a bug by spooky action at a distance. (Overflow on int is undefined behaviour in C, in case anyone is wondering what the bug is.)

…never believe anyone who says “C is a simple language”. Their code is shoddy and full of bugs and they should be forced to write Rust for their own good.
But your case is wrong anyways, because i <= INT_MAX will always be true, by definition. By your argument < is actually better, because it is consistent from < 0 to iterate 0 times, to < INT_MAX to iterate the maximum number of times. INT_MAX + 1 is the problem, not <, which is the standard way to write for loops, and the standard for a reason.

You’re right, that’s what I get for not having written a line of C in, what, 15 years. Bonus challenge: write for i in i32::MIN..=i32::MAX in C, that is, iterate over the whole range, start and end inclusive. (I guess the ..= might be where my confusion came from, because Rust’s .. is end-exclusive and thus like <, but also not what you want because i32::MAX + 1 panics.)
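For anyone who wants to take a crack at that challenge: one way to do it in C without tripping the overflow UB is to put the exit test inside the loop body, so the increment never runs once i has reached INT_MAX. A rough sketch, with a running count standing in for a real loop body:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned long long count = 0;

    /* Covers INT_MIN through INT_MAX inclusive. The loop has no condition;
       we break *before* the i++ that would overflow, so there is no UB. */
    for (int i = INT_MIN; ; i++) {
        count++;               /* placeholder for whatever you'd do with i */
        if (i == INT_MAX)
            break;
    }

    printf("iterated %llu times\n", count);  /* 2^32 on a typical 32-bit int */
    return 0;
}
```

(Using a wider loop counter, like long long, and comparing it against INT_MAX would work too; the point is just never to compute INT_MAX + 1 in an int.)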
<= makes sense if you start from 1.
I’m less knowledgeable than the OOP about this. What’s the code you quoted do?
@Moredekai@lemmy.world posted a detailed explanation of what it’s doing, but just to chime in that it’s an extremely basic part of programming. Probably a first week of class if not first day of class thing that would be taught. I haven’t done anything that could be considered programming since 2002 and took my first class as an elective in high school in 2000 but still recognize it.
It’s a standard formatted for loop. It’s creating the integer variable i and setting it to zero. The second part is saying “do this while i is less than 10”, and the last part is saying what to do after each run of the loop: increment i by 1. Under this would be the actual stuff you want to be doing in that loop. Assuming nothing in the rest of the code is manipulating i, it’ll do this 10 times and then move on.
I would also add that usually i will be used inside the code block to index locations within whatever data structures need to be accessed. Keeping track of how many times the loop has run has more utility than just making sure something is repeated 10 times.
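A tiny sketch of that, with a made-up array purely for illustration: i counts the passes through the loop and also picks which element to read.

```c
#include <stdio.h>

int main(void) {
    int scores[10] = {3, 7, 1, 9, 4, 6, 2, 8, 5, 0};  /* made-up example data */

    /* i runs 0 through 9: it counts the iterations and indexes the array */
    for (int i = 0; i < 10; i++) {
        printf("scores[%d] = %d\n", i, scores[i]);
    }
    return 0;
}
```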
It’s a for loop. Super basic code structure.
for (int i = 0; i < 10; i++)

This reads as “assign an integer to the variable i and put a 0 in that spot. Do the following code, and once completed add 1 to i. Repeat until i reaches 10.”

int i = 0 initializes i, tells the compiler it’s an integer (whole number), and assigns 0 to it, all at once.

i++ can be written a few ways, but they all say “add 1 to i”.

i < 10 tells it to stop at 10.

for tells it to loop, and starts a block which is what will actually be looping.

Edits: A couple of clarifications
Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors, they are so much lighter and easier to install.
Wait! Why am I always getting robbed lately? It can’t be my fake locks and doors! It has to be weirdos online following what I do.
The difference is locks on doors truly are just security theatre in most cases.
Unless you’re the BiLock and it takes the LockPickingLawyer 3 minutes to pick it open.
To be fair, it’s both.
hahahahahahahahahahahaha
But I thought vibe coding was good actually 😂
Vibe coding is a hilarious term for this too. As if it’s not just letting AI write your code.
Ok, but what did they try to do as a SaaS?
Money.
Devil’s advocate, not my actual opinion: if you can make a Thing that people will pay to use, easily and without domain-specific knowledge, why would you not? It may hit issues at some point, but by then you’ve already got ARR and might be able to sell it.
If you started from first principles and made a car or, in this case, told a flailing intelligence precursor to make a car, how long would it take for it to create ABS? Seatbelts? Airbags? Reinforced fuel tanks? Firewalls? Collision avoidance? OBD ports? Handsfree kits? Side impact bars? Cupholders? Those are things created as a result of problems that Karl Benz couldn’t have conceived of, let alone solved.
Experts don’t just have skills, they have experience. The more esoteric the challenge, the more important that experience is. Without that experience you’ll very quickly find your product fails due to long-solved problems, leaving you - and your customers - exposed to dangers that a reasonable person would conclude shouldn’t exist.
Yeh, arguably, and to a limited extent, the problems he’s having now aren’t the result of the decision to use AI to make his product so much as the decision to tell people about that, and of people deliberately attempting to sabotage it. I’m careful to qualify that, though, because the self-evident flaw in his plan, even if it only surfaced in a rather extreme scenario, is that he lacks the domain-specific knowledge to actually make his product work as soon as anything becomes more complicated than just collecting the money. Evidently there was more to this venture than just building the software needed for it to be a viable service. It’s much like if you considered yourself the ideas man, paid a programmer to engineer the product for you, then fired them straight after without hiring anyone to maintain it, keep the infrastructure going, or provide support for your clients, and then claimed you ‘built’ the product. You’d be in a similar scenario not long after your first paying customer finds out the hard way that you don’t actually know anything about your own service that you willingly took money for, and can’t actually provide the Service part of Software as a Service.
Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ShatGPT as a couples therapist, and what a great idea that was. They talked about how one of the podcasters has more of a friend-like relationship with “their” GPT.
I usually find this podcast quite entertaining, but this just got me depressed.
ChatGPT is by the same company that stole Scarlett Johansson’s voice. The same vein of companies that thinks it’s perfectly okay to pirate 81 terabytes of books, despite definitely being able to afford paying the authors. I don’t see a reality where it’s ethical or indicative of good judgement to trust a product from any of these companies with information.
I agree with you, but I do wish a lot of conservatives used ChatGPT or other AIs more. It, at the very least, will tell them that all the batshit stuff they believe is wrong and clear up a lot of the blatant misinformation. With time, will more batshit AIs be released to reinforce their current ideas? Yea. But ChatGPT is trained on enough (granted, stolen) data that it isn’t prone to retelling the conspiracy theories. Sure, it will lie to you and make shit up when you get into niche technical subjects, or when you ask it to do basic counting, but it certainly wouldn’t say Ukraine started the war.
It will even agree that AIs shouldn’t be controlled by oligarchic tech monopolies and should instead be distributed freely and fairly for the public good, but that the international system of nation states competing against each other militarily and economically prevents this. Then again, maybe it would agree with the opposite of that too; I didn’t try asking.
The increasing use of AI is horrifying. Stop playing Frankenstein! Quit creating thinking beings and using them as slaves.
“Come try my software! I’m an idiot, so I didn’t write it and have no idea how it works, but you can pay for it.”
to
“🎵How could this happen to meeeeee🎵”