Sorry to bring this argument to yet another thread, but the only reason what is fundamentally the exact same feature was generally perceived as a disaster for Microsoft last week but seems like a net win for Apple this week is that, man, they really do understand these things.
“Apple Intelligence” is a very stupid name, though.
I’d say it’s because Apple’s implementation isn’t essentially spyware at its core. The Microsoft implementation was straight up deranged and dangerous, frankly.
Nah, it’s exactly the same. Arguably in some aspects more suspect, in that it doesn’t seem to have an opt-out at all and it IS sending some data over the Internet for remote processing.
Presumably better local security than the first version MS announced, but we’ll have to see how it compares to the shipping version. They’re definitely obscuring what they’re actually doing a lot more. It’s Apple magic, not just letting some AI look at your screen and stuff.
But hey, ultimately, that’s my point. The fact that they went on that stage, sold the exact same thing, and multiple people out here, of all places, are going “no, but this time it’s fine” shows just how much better at selling stuff Apple is. I’m not particularly excited about either of these and don’t intend to use them, but come on, Apple’s messaging was so far ahead of MS’s on this one.
Apple’s solution does not require 200 GB of screenshots where most personal info is visible in plain text…

Apple wins here because they have a clear structure in their OS and all the important data already lives in Apple’s own apps. And they already analyze this stuff heavily, as you can see from all the Siri suggestions everywhere for, I don’t know, five years now?

Microsoft’s chaos approach in Windows is now shooting them in the foot real hard.
I hope we can get an open-source Linux AI that runs locally and integrates the way Apple’s does. It should be more feasible since, at least, all apps are mostly installed the same way(s) and are designed to work with each other.
I’m not saying anything particularly new and I’m mostly repeating what I’ve been saying since the announcement, but I’d argue that all of those caveats come down entirely to branding and PR, not engineering.
App design, yes. Microsoft built their Timeline 2 so that it actually shows you, in the UI, all the screenshots it took of you doing stuff, and that’s creepy. Apple doesn’t tell you what they’re pulling, and they are almost certainly processing it further to get deeper insights… but they do it in the background, so you don’t have to think about it as much.
So again, better understanding of the user, messaging and branding. Same fundamental functionality. Way different reactions.
Yes, but Apple doesn’t need to screenshot shit, that’s the point.
But they do, though.
The use cases they have presented are literally asking for a picture you received last week that contained a particular piece of text, then selecting that text and copying it over.

I know Apple made it seem like AI is magic, but here in the real world, running on real computers, you need to know what’s in the image to do that.
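Concretely, that means every image has to be run through OCR and indexed before you ever ask the question. A minimal sketch of just the text-extraction step, using Apple’s Vision framework (the framework call is real; everything around it, like when it runs and where the index lives, is my guess):

```swift
import Vision

// Hypothetical sketch of the step this feature actually requires:
// every received image gets OCR'd ahead of time so its text can be
// indexed and searched later.
func extractText(from imageURL: URL) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate  // slower, but better for search

    // Run the OCR request against the image on disk.
    try VNImageRequestHandler(url: imageURL, options: [:]).perform([request])

    // Each observation is one detected line of text; keep the best guess.
    return (request.results ?? []).compactMap {
        $0.topCandidates(1).first?.string
    }
}
```

Whether that index then lives as plain text, embeddings, or something fancier is exactly the part Apple is keeping vague.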
But hey, no, that’s my point. You understand what taking a screenshot of your desktop looks like. You can grok it well enough to feel weird about the idea of somebody doing that to you every five seconds. You can’t wrap your head around the steps of breaking down all your information to the extent Apple is describing. Yeah, they know exactly what you did and when, what you looked at, what it said, and how it relates to everybody you know and to your activity. But since you can’t intuitively understand what that requires, you don’t know enough to feel weird about it.
That right there is good UX, even if the ultimate level of intrusion is the same or higher.
This is not screenshotting. The picture is already a picture that the AppleAI has access to.
Apple solves it by having the AI daemon run with relatively low rights and analyze stuff directly through an API where apps expose data to it.

This is way less bad than just screenshotting everything, and as an added bonus, apps can give the AppleAI data that isn’t even shown on screen, which is impossible with the screenshot idea.
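Roughly, the shape would be something like Apple’s App Intents model, where an app declares structured entities and the assistant queries them instead of reading pixels. A hypothetical sketch (the entity and its fields are invented for illustration, not Apple’s actual schema):

```swift
import AppIntents

// Hypothetical sketch, loosely modeled on Apple's App Intents framework:
// a messaging app exposes a received picture as a structured entity so a
// low-privilege assistant daemon can query it through this interface
// instead of reading files or pixels directly.
struct ReceivedImage: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Received Image")
    static var defaultQuery = ReceivedImageQuery()

    var id: UUID
    var sender: String          // who sent it
    var receivedAt: Date        // when it arrived
    var recognizedText: String  // text the app already pulled out of the
                                // image: never on screen, still queryable

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "Image from \(sender)")
    }
}

// The query is what the daemon talks to: "picture from last week with
// this text" becomes a structured lookup in the app's own store.
struct ReceivedImageQuery: EntityQuery {
    func entities(for identifiers: [ReceivedImage.ID]) async throws -> [ReceivedImage] {
        []  // stub: a real app would fetch these from its database
    }
}
```

The tradeoff is right there: the daemon only sees what apps choose to expose, but what they expose can go well beyond what’s ever on screen.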
Hold on, how is this “low rights” if it’s looking at and reading every single file you have on your device AND every single thing you access online or have stored remotely? Surely, from a purely technical standpoint, looking at the screen is less access by every reasonable metric. If you don’t look at it, the AI doesn’t know about it. Right? Do we have a sense of shared reality here?
Don’t get me wrong, that’s still very effective spyware and I certainly don’t want a screenlogger running on my device, Apple or Microsoft. But if you present me with a system that constantly reads every file you access in any capacity and remembers it, displayed onscreen or not, versus one that looks at your screen… well, the one that looks at your screen knows less about you by any measure. OBS can record your screen, but it doesn’t know what the emails you haven’t read while you’re recording say.
The info is easier to extract, easier to make human-readable, definitely creepier in concept, and probably easier to exploit. But it’s less intrusive. Can we at least agree on that?
It’s opt-in.
Oh, did I miss that? Did they explain how that works and what AI features are still functional if you don’t turn it on?