- cross-posted to:
- privacy@lemmy.ml
Google’s AI model will potentially listen in on all your phone calls — or at least ones it suspects are coming from a fraudster.
To protect the user’s privacy, the company says Gemini Nano operates locally, without connecting to the internet. “This protection all happens on-device, so your conversation stays private to you. We’ll share more about this opt-in feature later this year,” the company says.
“This is incredibly dangerous,” says Meredith Whittaker, president of the Signal Foundation, which develops the end-to-end encrypted messaging app Signal.
Whittaker, a former Google employee, argues that the entire premise of the anti-scam call feature poses a potential threat. That’s because Google could program the same technology to scan for other keywords, such as requests for access to abortion services.
“It lays the path for centralized, device-level client-side scanning,” she said in a post on Twitter/X. “From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”
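The slippery slope Whittaker describes is easy to make concrete: client-side scanning is just pattern matching over locally held content, and what gets flagged depends entirely on the pattern list the vendor ships. A toy sketch of the mechanism (hypothetical keyword matching for illustration only; Gemini Nano is a neural model, not a regex list):

```python
import re

# Hypothetical pattern list -- the mechanism is neutral, the policy is
# whatever list the vendor (or a compelling government) ships.
SCAM_PATTERNS = [
    r"\bwire (the )?money\b",
    r"\bgift cards?\b",
    r"\bverify your (account|password)\b",
]

def flag_transcript(transcript: str, patterns=SCAM_PATTERNS) -> list[str]:
    """Return the patterns that match a locally held call transcript."""
    return [p for p in patterns if re.search(p, transcript, re.IGNORECASE)]

hits = flag_transcript("Please buy gift cards and verify your account.")
# Both the gift-card and verify-your-account patterns match.
```

Replacing `SCAM_PATTERNS` with patterns for reproductive care, LGBTQ resources, or whistleblowing requires changing nothing else, which is exactly the point being made.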
One of the things they glided around was whether a lot of this on-device processing needs a special processor with dedicated AI and security hardware to work.
Google’s own Pixel phones (especially newer ones) have such chips, but the vast majority of Android phones don’t.
So either these features only work on the latest Google phones (which will piss off licensees and partners), or they’re doing this sort of detection on plain old CPUs/GPUs, in which case it will be sniffable by malicious third parties.
And let’s not forget that if the phone can listen to your conversations to detect malicious intent, any government can legally compel Google to hand over the data by claiming it’s part of a law-enforcement investigation.
Things are going to get spicy in Android-land.