During its annual developers conference on Monday, Apple introduced Apple Intelligence, an eagerly anticipated artificial intelligence system designed to personalize user experiences, automate tasks, and, as CEO Tim Cook assured, set “a new standard of privacy in AI.”
Although Apple emphasizes that its AI prioritizes security, its partnership with OpenAI has drawn criticism. OpenAI’s ChatGPT, launched in November 2022, raised privacy concerns by collecting user data for model training without explicit consent; users were not given the option to opt out of that data collection until April 2023.
Apple has said its integration with ChatGPT will be limited to specific tasks and will require explicit user consent, but security experts remain watchful of how those concerns will be addressed.
Apple is late to the generative AI game, trailing competitors such as Google, Microsoft, and Amazon, whose AI ventures have boosted their stock prices. Until now, it has refrained from integrating generative AI into its main consumer products.
Apple says it aims to apply AI responsibly: it has built Apple Intelligence over several years on proprietary technology designed to minimize the user data that leaves its ecosystem.
Generative AI, which requires vast amounts of data to train its language models, poses a challenge to Apple’s focus on privacy. Critics such as Elon Musk argue that it is impossible to balance AI integration with user privacy. Some experts, however, disagree.
“By pursuing privacy-focused strategies, Apple is leading the way for businesses to reconcile data privacy with innovation,” said Gal Ringel, CEO of the data privacy software company Mine.
Many recent AI releases have been criticized for being dysfunctional or risky, reflecting Silicon Valley’s “move fast and break things” culture. Apple seems to be taking a more cautious approach.
According to Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance: “Historically, platforms release products first and address issues later. Apple is proactively tackling common concerns. This illustrates the difference between designing security measures upfront versus addressing them reactively, which is always less effective.”
Central to Apple’s AI privacy measures is its new private cloud computing technology. Apple intends to run most Apple Intelligence computations on the device itself; for tasks that require more processing power, it will offload work to the cloud while safeguarding user data.
To achieve this, Apple will share only the data necessary for each request, apply additional security protections at the endpoints, and avoid storing data long term. Apple will also open the tools and software behind its private cloud to third-party verification.
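How that routing might behave can be sketched in a few lines of code. The Swift below is purely illustrative: the names (RequestRouter, MinimalPayload, the complexity threshold) are hypothetical and do not correspond to any real Apple API. It models only the behaviour described above, running requests locally when possible, sending the minimum necessary data otherwise, and keeping nothing afterwards.

    // Hypothetical sketch: none of these names are Apple’s real APIs.
    struct AIRequest {
        let prompt: String
        let estimatedComplexity: Int // e.g. a token-count hint
    }

    // Only the fields strictly needed for this one request leave the device.
    struct MinimalPayload {
        let prompt: String
    }

    struct RequestRouter {
        // Arbitrary illustrative threshold; requests below it stay on-device.
        let onDeviceLimit = 1_000

        func handle(_ request: AIRequest) -> String {
            if request.estimatedComplexity <= onDeviceLimit {
                // Most computation happens locally.
                return runLocally(request)
            }
            // Heavier work is offloaded, but only a minimal payload is sent,
            // and it is a local value discarded once the call returns,
            // mirroring the "no long-term storage" promise.
            let payload = MinimalPayload(prompt: request.prompt)
            return sendToPrivateCloud(payload)
        }

        private func runLocally(_ request: AIRequest) -> String {
            "on-device result for: \(request.prompt)"
        }

        private func sendToPrivateCloud(_ payload: MinimalPayload) -> String {
            // A real implementation would use an attested, encrypted channel;
            // this stub just lets the sketch run.
            "cloud result for: \(payload.prompt)"
        }
    }

    let router = RequestRouter()
    print(router.handle(AIRequest(prompt: "summarize my notes", estimatedComplexity: 200)))
    print(router.handle(AIRequest(prompt: "draft a long report", estimatedComplexity: 5_000)))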
Private cloud computing represents a significant advancement in AI privacy and security, according to Krishna Vishnubhotla, VP of product strategy at Zimperium. The independent audit component is particularly noteworthy.
Source: www.theguardian.com