"To put it bluntly, the path currently being taken towards agentic AI leads to an elimination of privacy and security at the application layer.
-
"To put it bluntly, the path currently being taken towards agentic AI leads to an elimination of privacy and security at the application layer. It will not be possible for apps like Signal—the messaging app whose foundation I run—to continue to provide strong privacy guarantees, built on robust and openly validated encryption, if device-makers and OS developers insist on puncturing the metaphoric blood-brain barrier between apps and the OS. Feeding your sensitive Signal messages into an undifferentiated data slurry connected to cloud servers in service of their AI-agent aspirations is a dangerous abdication of responsibility.
Happily, it’s not too late. There is much that can still be done, particularly when it comes to protecting the sanctity of private data. What’s needed is a fundamental shift in how we approach the development and deployment of AI agents. First, privacy must be the default, and control must remain in the hands of application developers exercising agency on behalf of their users. Developers need the ability to designate applications as “sensitive” and mark them as off-limits to agents, at the OS level and otherwise. This cannot be a convoluted workaround buried in settings; it must be a straightforward, well-documented mechanism (similar to Global Privacy Control) that blocks an agent from accessing our data or taking actions within an app.
Second, radical transparency must be the norm. Vague assurances and marketing-speak are no longer acceptable. OS vendors have an obligation to be clear and precise about their architecture and what data their AI agents are accessing, how it is being used and the measures in place to protect it."

#AI #GenerativeAI #AgenticAI #AIAgents #Privacy #CyberSecurity #Surveillance
-
Oblomov shared this thread