• 0 Posts
  • 122 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • This so very much. I’ve been saying it since 2020. People who think the big corporations (even the ones that use AI) haven’t been playing both sides of this issue from the very beginning just aren’t paying attention.

    It’s in their interest to have those positive about AI defend them by association, by energizing those negative about AI into an “us vs them” mentality, and the other way around as well. It’s the classic divide and conquer.

    Because if people refuse to talk to each other about it in good faith, refuse to treat each other with respect, and refuse to learn where the other side is coming from or why they hold such opinions, you can keep them fighting amongst themselves instead of banding together and demanding realistic, fair policies on AI. This is why bad faith arguments and positions must be shot down both on the side you agree with and on the one you disagree with.


    Can I add 4. the integrated video downloader actually downloads videos, in whatever format you want, with no internet connection required to watch them. This to me is still the biggest scam ‘feature’ of Youtube Premium. You can “download” videos, but not as e.g. an mp4; you get an encrypted file only playable inside the Youtube app, and only if you’ve connected to the internet in the last couple of days.

    That’s not downloading, that’s just jacking my disk space to avoid buffering the video from Youtube’s servers. That’s not a feature, that’s me paying for Youtube’s benefit.

    I cancelled and haven’t paid for Premium in years because of it. When someone scams me out of features I paid for, I don’t reward that shit until they either stop lying in their feature list, or actually start delivering.


    It really depends. There are some good uses, but it requires careful consideration and understanding of what the technology can actually provide. And if there isn’t anything for your use case, it’s just not what you should use.

    Most if not all of the bigger companies that push it don’t really try to use it for those purposes, but instead treat it as the next big thing that nobody quite understands, building mostly on hype. But smaller companies and open source initiatives do try to make the good uses more accessible and less objectionable.

    There are plenty of cases where people do nifty things with positive outcomes: researchers using it for pattern recognition, scambait chatbots, creative projects that make use of the characteristics that set AI apart from human creations, etc.

    I like to keep an open mind as to what people come up with, rather than dismissing it outright when AI is involved. Although hailing something as an AI product is a red flag for me if that’s all that’s advertised.


  • ClamDrinker@lemmy.world to Memes@sopuli.xyz · who are you? (edited, 2 months ago)

    It also very much depends on your country, food authority, and retailer. Some food authorities have stricter categories for very perishable foods, e.g. meat and vegetables, where you can’t tell it’s no longer suitable for consumption unless it has gone very bad. And while the producer has an incentive to encourage waste, the retailer has an incentive to reduce it, as you typically can’t sell items to consumers that are no longer within date (again, depending on your location). If the retailer has to throw out an item unreasonably often, that has consequences for the deals made between the retailer and the producer, which pushes the producer not to be too inaccurate either.


  • It can’t simultaneously be super easy and bad, yet also a massive propaganda tool. You can definitely dislike it for legitimate reasons though. I’m not trying to anger you or something, but if you know about #1, you should also know why it’s a good tool for misinformation. Or you might, as I proposed, be part of the group that incorrectly assumes they already know all about it and will be more likely to fall for AI propaganda in the future.

    E.g. Trump posting pictures of himself as the pope, of Gaza as a paradise, etc. These still have some AI tells, and Trump is a grifting moron with no morals or ethics, so even if it weren’t AI you would still be skeptical. But one of these days someone like him, whom you don’t know ahead of time, is going to make an image or a video that’s just plausible enough to spread virally. And it will be used to manufacture legitimacy for something horrible, as other propaganda has in the past.

    “but why do we want it? What does it do for us?”

    You yourself might not want it, and that’s totally fine.

    It’s a very helpful tool for creatives such as VFX artists and game developers, who are kind of masters of making things that aren’t real seem real. The difference is that they don’t want to lie or obfuscate what tools they use, but #2 gives them a huge incentive to do just that: not because they don’t want to disclose it, but because chronically overworked and underpaid people don’t also have time to deal with a hate mob on the side.

    And I don’t mean they use it as a replacement for their normal work, or just to sit around and do nothing; they integrate it into their processes either to enhance quality or to reduce time spent on tasks with little creative input.

    If you don’t believe that’s what they use it for, here’s a list of games on Steam with at least a 75% rating, 10,000 reviews, and an AI disclosure.

    And that’s a self-perpetuating cycle. People hide their AI usage to avoid hate -> making fewer people aware of the depths of what it can be used for, so they think AI slop and other obviously AI generated material is all it can do -> which biases them against any kind of AI usage, because they think it’s either trivially easy to use well or just lazy to use -> giving people hate for it -> in turn making people hide their AI usage more.

    By giving creatives the room to teach others about what AI helped them do, whether you like it or dislike it, such as through behind-the-scenes features, artbooks, guides, etc., we increase awareness in the general population about what it can actually do, and that it is being used. Just imagine a world where you never knew about the existence of VFX, or thought it was only used for that one stock explosion and nothing else.

    PS. Bitcoin is still around and decently big. I’m not a fan of it myself, but that’s just objective reality. NFTs have always been mostly good for scams. But really, these technologies have little to no bearing on the debate around AI; history is littered with technologies that didn’t end up panning out, but it’s the ones that do that cause shifts. AI is such a technology in my eyes.


  • I didn’t say AI would solve that, but I’ll reiterate the point I’m making differently:

    1. Spreading awareness of how AI operates, what it does, what it doesn’t, what it’s good at, what it’s bad at, and how it’s changing (such as knowing there are hundreds if not thousands of regularly used AI models out there, some owned by corporations, others open source, and others somewhere in between) reduces misconceptions and makes people more skeptical when they see material that might have been AI generated or AI assisted being passed off as real. This is especially important to teach during transition periods such as now, when AI material is still more easily distinguishable from real material.


    2. People creating a hostile environment where AI isn’t allowed to be discussed, analyzed, or used in ethical and good faith ways make it more likely that some people who desperately need to be aware of #1 stay ignorant. They will just see AI as a boogeyman, failing to realize that, e.g., AI slop isn’t the only type of material AI can produce. This makes them more susceptible to seeing something made by AI and believing it or misjudging the reality of the material.


    3. Corporations, and those without the incentive to use AI ethically, will not be bothered by #2, and will even rejoice that people aren’t spending time on #1. It will make it easier for them to claw AI technology for themselves through obscurity, legislation, and walled gardens, and the less knowledge there is in the general population, the more easily it can be used to influence people. Propaganda works, and the propagandist is always looking for technology that lets them reach more people; ill-informed people are easier to manipulate.


    4. And lastly, we must reward those who try to achieve #1 and avoid #2, while punishing those in #3. We must reward those who use the technology as ethically and responsibly as possible, as any prospect of completely ridding the world of AI is just futile at this point, and a lot of care will be needed to avoid the pitfalls where #3 gains the upper hand.


  • This is the inevitable end game of some groups of people trying to make AI usage taboo through anger and intimidation, with no room for reasonable disagreement. The ones devoid of morals and ethics will use it to their heart’s content and would never engage with your objections anyway, and when the general public is ignorant of what it is and what it can really do, people get taken advantage of.

    Support open source and ethical usage of AI, where artists, creatives, and those with good intentions are not caught up in your legitimate grievances with corporate greed, totalitarians, and the like. We can’t reasonably make it go away, but we can reduce its harmful use.


  • While there are spaces that are luckily still looking at it neutrally and objectively, there are definitely leftist spaces where AI hatred has snuck in, even to a reality-denying degree where lies about what AI is or isn’t have taken hold, and where facts offered to refute such claims are rejected and met with hate and shunning purely because they go against the norm.

    And I can’t help but agree that they are being played, so that the only AI technology that will eventually be feasible will not be open source, but in the control of the very companies that left-leaning folks dislike or hate.


  • Never assumed you did :), but yes, as few assumptions as possible is best. But as you can already tell, it’s hard to communicate while taking on no assumptions when people make explicit statements, crafted to dispel assumptions, that are entirely plausible for a hypothetical real person to have.

    In fact, your original statement of “They have no doubts. Never occurred to them it might be a joke…” is in itself a pretty big assumption. Unless, of course, I assume that statement to be hyperbole, or even satire. But if we want to have fun talking about a shitpost, we do kind of have to decide on an assumptive position about the meme, which can’t talk back.


  • People making assumptions is the issue.

    There are assumptions involved in detecting satire from just text as well. You would just have a reverse Poe’s law, where “any extreme views can be mistaken by some readers for satire of those views without a clear indicator of the author’s intent”.

    Normally when people say or type things, we (justifiably) assume that to be what they mean, which is why satire works much better when spoken: intonation can make the satire explicit without changing the words or stating outright that it’s satire.



    Yeah, it’s literally all over this thread, not exactly a secret. It’s kind of a weird nitpick of my comment, considering it’s just a way of phrasing things. If I give an alcoholic some money, I will say “they might use that to buy booze”: I am sure they buy booze, but they might use my money to buy some food instead. Not every single dollar you give the developers will go to ml.


  • You’re not required to do anything, let alone directly funding ml. That’s not what I am arguing for. I am arguing for you to support Lemmy despite the chance some of it might go to ml.

    It goes the other way too: the developers probably disagree with a large part of the beliefs of the people using Lemmy, yet they still put in their time to create and foster it, which we never had to pay for either. They did it for the reasons they mention (free spaces, not owned by corporations that suck their users dry), which is separate from their other political positions.


  • While I understand the moral objections people have to supporting the developers, I do think it’s fair to highlight how they do not treat us.

    We are not a product here to be exploited and advertised to. They also respect your choice to block ml and otherwise not interact with them at all. I am sure I would be absolutely appalled by the depth of depravity of your average Silicon Valley CEO’s hot takes, but they don’t share them, for this exact reason. Instead they just design their entire product and business around them, which is the enshittification we all know and hate.

    People you don’t agree with having a place of their own on the fediverse is a logical consequence of the idea behind it, and while uncomfortable, it is a greater good in the end.

    But maintaining that means putting your money where your mouth is. If not to them, then to your own instance.