• 0 Posts
  • 22 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • the issue is that a lot of assumptions are being made about the commenter’s intention in their response to OP. i feel the emphasis keeps moving back to how they misinterpreted OP, and their failing in doing so. i’m recognizing that ‘failing,’ but also suggesting the bigger issue is how people interpret the response as invalid through their own biases and preferences.

    not projecting the same preference gets read as ‘misreading the room,’ rather than as a valid response from a different type of person. it gets assumed to be intentionally, or definitively, ‘rude’ rather than just a different, and still valid, way of responding to the information provided.

    i assume nothing negative was meant by it. even if it wasn’t the implied commiseration op was looking for, that doesn’t suddenly make it antagonistic. the issue is that so many view it immediately as antagonistic or ‘wrong,’ when it could have been entirely valid had i been OP, or had i been the one saying the same thing as OP. we all have many blindspots, and some things aren’t always salient.

    if you experience this reaction every time society sees that you interpreted things differently, you get a bunch of autistic people (or other groups in a preference/experiential minority) hating life. this is also indicative of many other communication failures caused by excess fitting towards homogeneity and unconsciously creating social rules to keep things simple and energy-free. if you are a surprising element, you get chastised for making others expend energy interpreting your model, because you haven’t successfully been beaten into being less noticeable, even when that completely denies your lived reality. see gay conversion therapy/ABA (same source) for how that tactic is often applied.

    not to escalate, but a constant barrage of these experiences, often without such context being given, leads to many otherwise well-adjusted autistic people hating life and opting out entirely. this is why i feel compelled to promote understanding of the different styles of interpretation. i don’t want to lose any more friends.

    many autistic people are already trying, but the communication failure isn’t just on their side of the interaction. it’s easier to tar and feather the person as an easy pariah than to consider that the comment may have been intended less as a slight and more as a valid recommendation from someone with a different dialect for interpreting “…see a movie.”

    i suggest looking up accounts of autistic experiences, because a lot of them boil down to the trauma of escalated antagonism just for existing and not already having the exact preferences of others, which makes predicting those preferences impossible without a doctorate in non-autistic preference modelling, and without writing that model over your whole existence any time you interact with the public.

    also, understanding the double empathy problem can help with many other communication difficulties in non-homogeneous groups.


  • makes sense. i’m coming to see how people do this, but it’s still baffling to me. by ‘this’ i mean socially affirming each other rather than trying to interact with the issue in any way. not just as a preference, but as a forced exclusive.

    also legitimately sorry that i can’t compress the whole picture to a quick quip.

    but what i meant by my comment was that the downvoted-to-oblivion comment was possibly misinterpreted in intent and meaning even more than it misinterpreted OP’s meme.

    i see it as low-dimensional communication exacerbating the size of blindspots for the whole of what is being communicated, because everyone is trying to reduce the energy consumption of language by socially affirming heuristics built on salient preference. this can be mapped to first principles from Friston’s free energy principle, into active inference; MIT Press has a good textbook on it, although there’s been a lot of new work since then (a minimal sketch of the quantity involved is at the end of this comment). those who don’t naturally share that preference become ‘wrong’ for communicating what they could interpret without having that same importance given to things they might not think about, like social ego stroking over just interacting with the concept sans ego.

    more commonly, people are becoming familiar with the ‘double empathy problem’: basically the context-and-language equivalent of yelling at the autistic kid for not making levels of eye contact that they find painfully intimate and uncomfortable. yes, the local community can think eye contact is ‘just having basic manners’ or ‘just being a decent person,’ but forcing them to do it, and creating a majority-salient confirmation bubble chastising them for not doing it constantly and confidently, is salt in the wound.

    again, thank you for reading this far if you have. none of this is accusatory towards anyone, just an honest attempt at noting current popular communication failures and how to frame them.

    the double empathy problem also applies to most predictive models projecting in differently socialized spaces. it’s good for people to comprehend.
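
    for reference, by ‘free energy principle’ above i mean the variational free energy from that literature; a minimal sketch in standard notation (my paraphrase, not a quote from the textbook):

    $$ F[q] = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o) $$

    minimizing F over the internal beliefs q(s) pushes them toward the true posterior p(s|o) and, since F upper-bounds -ln p(o), makes observations less surprising; active inference extends this by also selecting actions expected to reduce F.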


  • As an autistic person who sees information sharing as more valid and respectable than affirming possible ignorant perspectives for the sake of obtuse social saliency, all I see is a fact and a valid question.

    It’s also valid advice for those with money: skip the theater ticket to another Disney slop live-action remake and donate that money to independent artists trying to survive, and to simultaneously have a voice, despite the Disney/Warner types’ stranglehold over sellable cinema in most public spaces.

    People get so upset when anything questions their current trajectory, rather than saying “oh yeah, that’s a valid perspective to avoid the issue in context.”

    And it gets a lot of autistic people yelled at for doing their job or trying to help, IMO.

    Is there a reason the advice and question aren’t valid? To me the only rudeness here is in the framing of the rebuke.

    This isn’t trying to one up anyone, this is an attempt to communicate, and improve people’s ability to communicate.

    I’ve even seen doctors excuse the bullying of autistic children because the child joined a discussion of test scores without pandering to the egos of people who were socially affirming how terrible the test must be, given their performance.

    At this point people need to start trying to understand the double empathy problem, because it’s valid for more cases of communication differences than just autism.

    Thank you for reading!


  • i think it’s a framing issue, and AI development is catching a lot of flak for the general failures of our current socio-economic hierarchy. also, people have been shouting “super intelligence or bust” for decades now. i just keep watching it get better much more quickly than most people’s estimates, and i understand the implications of that. i do appreciate discouraging idiot business people from shunting AI into everything that doesn’t need it, whether because it’s a buzzword or because they can use it to exploit something. some likely just used it as an excuse to fire people, but again, that’s not actually the AI’s fault. that’s this shitty system. i guess my issue is that people keep framing this as “AI bad” instead of “corpos bad.”

    if the loom was never invented, we would still live in an oppressive society sliding towards fascism. people tend to miss the forest for the trees when looking at tech tools politically. also people are blind to the environment, which is often more important than the thing itself. and the loom is still useful.

    there’s a lot worth learning here: compression and polysemy growing your dimensions of understanding in a high-dimensional environment that is itself changing shape, and comprehension growing with the erasure of your blindspots. collective intelligence (and how diversity helps cover more blindspots). predictive processing (and how we should embrace lack of confidence, but understand the strength of proper weighting for predictions, even when a single blindspot can shift the entire landscape, making no framework flawless or perfectly reliable). and understanding that everything we know is just the best map of the territory we’ve figured out so far, etc. if you want to judge how subtle but in-our-face blindspots can be, look up how to test your literal blindspot: you just need 30 seconds and a paper with two small dots to see how blind we are to our blindspots.

    more than fighting the new tools we can use, we need to claim them, and the rest of the world, away from those who ensure that all tools will only exist to exploit us.

    am i shouting to the void? wasting the breath of my digits? will humanity ever learn to stop acting like dumb angry monkeys?


  • let’s make another article completely misrepresenting opinions/trajectories and the general state of things, because we know it’ll sell and it will get the ignorant fighting with those who actually have an idea of what’s going on, because they saw in an article that AI was eating the pets.

    please seek media sources that actually seek to inform rather than provoke or instigate confusion or division through misrepresentation and disinformation.

    these days you can’t even try to fix a category error introduced by the media without getting cussed out and blocked from congregate sites, because you ‘support the evil thing’ that the article said was evil and that everyone in the group hates, without even an attempt to understand the context, or which part of the thing is even being discussed.

    also, can we talk more about breaking up the big companies so they don’t have a hold on the technology, rather than getting mad at everyone who interacts with modern technology?

    legit, it’s as bad as fighting rightwing misinformation about migrant workers and trans people.

    just make people mad, and teach them that communication is a waste of energy.

    we need to learn how to tell who is informing rather than obfuscating, through historicity of accuracy and consensus with other experts from diverse perspectives, not by building tribes around whoever agrees with us. and don’t blame experts for failing to apply a novel and virtually impossible level of compression when explaining their complex expertise, when you don’t even want to learn a word or concept. it’s like being asked to describe how cameras work, and then getting called an idiot because some analogy you used can be imagined in a less useful context that doesn’t apply 1:1 to the complex subject being summarized.

    outside of that, find better sources of information. fuck this communication disabling ragebait.

    cause now, just having a history of rebuking this garbage gets you dismissed, because a history of interacting with the topic on this platform is treated as a good enough vibe check to skip any attempt at understanding and interaction.

    TLDR: the quality of the articles and conversation on this subject is so generally ill-informed that it hurts, and they are obviously trying to craft environments of angry engagement rather than to inform.

    also i wonder if anyone will actually engage with this topic rather than get angry, cuss me out, and not hear a single thing being communicated.


  • Or maybe the solution is in dissolving the socio-economic class hierarchy, which can only exist as an epistemic paperclip maximizer. Rather than also kneecapping useful technology.

    I feel much of the critique and repulsion comes from people without much knowledge of art/art history or AI, nor of the problems and history of socio-economic policies.

    Monkeys just want to be angry and throw poop at the things they don’t understand. No conversation, no nuance, and no understanding of how such behaviours roll out the red carpet for continued ‘elite’ abuses that shape our every aspect of life.

    The revulsion is justified, but misdirected. Stop blaming technology for the problems of the system, and start going after the system that is the problem.



  • It’s the “you stole my style” artists attacking artists all over again. And it’s “digital art isn’t real art” / “cameras are evil” / “CGI isn’t real art” all over again, with a more organic and intelligent medium.

    The issue is the same as it has always been. Anything and everything is funneled to the rich, and the poor blame the poor who use technology, because anthropocentric bias makes them easier to vilify than the assholes building our cage around us.

    The Apple “ecosystem” has done much more damage than AI artists, but people can’t seem to comprehend how. Also, Disney and the corpos broke copyright so that it’s just a way for the rich to own words and names and concepts, so that the poor can’t use them to get ahead.

    All art is a remix. Disney only became successful by using other artists’ hard work in the Commons. Now the Commons is a century further out of grasp, so only the rich can own the artists and hoard the growth of art.

    Also which artists actually have the time and money to litigate? I guess copyright does help some nepo artists.

    Nepotism is the main way to earn the chance to invest in becoming an artist without fatiguing towards a collapse of your life.

    But let’s keep yelling at the technology for being evil.


    That argument was to be had with Apple twenty years ago as they built their walled garden, which intentionally frustrates people into going all-in on Apple. I still can’t get anyone to care about dark patterns/deceptive design, or about Disney attacking the creative Commons it parasitically grew out of. AI isn’t and has never been the real issue. It just absorbs all the hate the corpos should be getting as they use it, along with every other tool at their disposal, to slowly fuck us into subservience. Honestly, AI is teaching us the importance of diverse perspectives in intelligent systems, and the dangers of overfitting, which exist in our own brains and social/economic systems.

    Same issue, different social ecosystem being hoarded by the wealthy.



  • I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for actions within that space. I think we are discovering more and more that “nature” has little commitment, and is just optimizing preparedness for expected levels of entropy within the functional eco-niche.

    Most people haven’t even started paying attention to distributed systems building shared enactive models, but they are already capable of things that should be considered groundbreaking considering the time and finances of development.

    That being said, localized narrow generative models are just building large individual models of predictive processing that don’t, by default, actively update their information.

    People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors (a toy sketch of the idea is at the end of this comment).

    But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competency.
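
    Since ‘predictive processing’ keeps coming up, here is a toy sketch of the core idea (entirely my own illustration; names like sensory_precision are made up for clarity): keep a guess, predict the observation, and correct the guess by the precision-weighted prediction error.

    def update_belief(belief, observation, prior_precision=1.0, sensory_precision=0.5):
        # shift the belief toward the observation in proportion to how much
        # we trust the senses relative to the prior
        prediction_error = observation - belief
        learning_rate = sensory_precision / (sensory_precision + prior_precision)
        return belief + learning_rate * prediction_error

    belief = 0.0                      # initial guess (the prior)
    for obs in [1.0, 1.2, 0.9, 1.1]:  # noisy observations of a value near 1.0
        belief = update_belief(belief, obs)
        print(f"observation={obs:.1f}  updated belief={belief:.2f}")

    The organic version adds hierarchy and actively adjusts those precisions, but the guess-first, correct-by-weighted-error loop is the part most critics skip over.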


  • While I agree about the conflict of interest, I would largely say the same thing even without such a conflict of interest. However, I see intelligence as a modular and many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

    On that note, the recent developments with tools like RxInfer are astonishing given the current level of attention being paid. Seeing how LLMs are being treated, I’m almost glad it’s not being absorbed into the hype and hate cycle.


  • As always, the problem is our economic system that has funneled every gain and advance to the benefit of the few. The speed of this change will make it impossible to ignore the need for a new system. If it wasn’t for AI, we would just boil the frog like always. But let’s remember the real issue.

    If a free food generating machine is seen as evil for taking jobs, the free food machine wouldn’t be the issue. Stop protesting AI, start protesting affluent society. We would still be suffering under them even if we had destroyed the loom.



  • Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn’t inevitably lead to unnecessary conflict between diverging models that haven’t grown the priors necessary to peacefully allow comprehension and the ability to exist simultaneously.

    breath

    We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

    We’re seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self focused turds blowing up the world.


  • AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.

    The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.

    However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought, at the personal, group, and societal levels.

    Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.

    Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent, the more difficulty in bridging this communication gap.

    A distributed intelligent system could not only enable a complete social restructuring with autonomy and altruism both guaranteed, but also provide an overarching connection between the different models at every scale, capable of properly interpreting the different views and conveying them more accurately than we could ever manage with model projection and the empathy barrier.


  • The main issue though is the economic system, not the technology.

    My hope is that it shakes things up fast enough that they can’t boil the frog, and something actually changes.

    Having capable AI is a more blatantly valid excuse to demand a change in economic balance and redistribution. The only alternative would be to destroy all technology and return to monkey. I’d rather we just fix the system, so that technological advancements don’t seem negative merely because the wealthy have hoarded all the gains of every new technology for the past handful of decades.

    Such power is discreetly weaponized through propaganda, influencing, and economic reorganizing to ensure the equilibrium stays until the world is burned to ash, in sacrifice to the lifestyle of the confidently selfish.

    I mean, we could have just rejected the loom. I don’t think we’d actually be better off, but I believe some of the technological gain should have been less hoardable by existing elite. Almost like they used wealth to prevent any gains from slipping away to the poor. Fixing the issue before it was this bad was the proper answer. Now people don’t even want to consider that option, or say it’s too difficult so we should just destroy the loom.

    There is a Markov blanket around the perpetuating lifestyle of modern aristocrats, one that is obviously capable of surviving every perturbation; every gain we make as a society has made that more true, entirely because of where new power gets distributed. People are afraid of AI turning into a paperclip maximizer, but that’s already what happened to our abstracted social reality. Maximums being maximized and minimums being minimized in a complex, chaotic system of billions of people leads to an inevitable accumulation of power and wealth wherever it has already been gathered (the toy simulation at the end of this comment illustrates the dynamic). Unless we can dissolve the political and social barriers maintaining this trend, we will be stuck with our suffering regardless of whether we develop new technology or not.

    Although it doesn’t really matter where you are or what system you’re in right now: odds are there is a set of rich assholes working as hard as possible to keep you from any piece of the pie that would destabilize the status quo.

    I’m hoping AI is drastic enough that the actual problem isn’t ignored.
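
    To make the “accumulation wherever it has already been gathered” point concrete, here is a toy preferential-attachment sketch (entirely my own illustration, not a model of any real economy): everyone starts equal, each unit of new wealth lands on an agent chosen in proportion to what they already hold, and concentration still emerges with no skill differences at all.

    import random

    random.seed(0)
    wealth = [1.0] * 100                   # 100 agents, perfectly equal start
    for _ in range(5000):                  # each round creates 1 unit of new wealth
        # the new unit goes to an agent picked with probability
        # proportional to their current wealth (rich-get-richer rule)
        winner = random.choices(range(100), weights=wealth)[0]
        wealth[winner] += 1.0

    wealth.sort(reverse=True)
    top_share = sum(wealth[:10]) / sum(wealth)
    print(f"top 10% of agents hold {top_share:.0%} of all wealth")

    Run it with a few different seeds; the top decile reliably ends up holding well above its “fair” 10% purely from the reinforcement dynamic, which is the paperclip-maximizer flavour of the current setup.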




  • I definitely agree that copyright is a good half-century overdue for an update. Disney and its contemporaries should never have been allowed the dominance and extension of copyright that allows what feels like ownership of most global artistic output. They don’t need AI; they have the money and interns to create whatever boardroom-adjusted art they need to continue their dominance.

    Honestly, I think the faster AI happens, the more likely it is that we find a way out of the social and economic hierarchical structure that feels one step from anarcho-capitalist aristocracy.

    I just hope we can find the change without riots.