Apple has a problem. Its so-called smart features—summarised notifications, auto-generated emails, personalised content—haven’t exactly wowed users.
Some say they’re clunky, slow, and miss the point entirely. Now, in a rare pivot, Apple is quietly trying to fix things by tapping into something it has long guarded: your device’s data.
No, they’re not harvesting your emails. But they are sending test data to your iPhone or Mac, provided you’ve opted in to share analytics. That’s the catch. Apple frames the approach around differential privacy, but in real terms, here’s what it means: Apple creates large pools of fake emails, analyses their structure, and then has your device compare them to snippets of your real content, without that content ever being collected.
It’s a workaround. A clever one.
The company admitted in a blog post: “To curate a representative set of synthetic emails, we start by creating a large set of synthetic messages on a variety of topics […] We then derive a representation, called an embedding, of each synthetic message that captures some of the key dimensions of the message like language, topic, and length.”
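Apple hasn’t published the embedding model it uses, but the idea the blog post describes can be sketched with a toy example: map each message to a small numeric vector capturing properties like length and vocabulary, then compare vectors by distance. A minimal Python sketch, where the specific features are invented purely for illustration:

```python
import math

def embed(message: str) -> list[float]:
    """Map a message to a small numeric vector.

    Apple's real embeddings would come from a trained language model;
    these hand-picked features (length, average word length, question
    density) are stand-ins chosen only to make the idea concrete.
    """
    words = message.split()
    n_words = max(len(words), 1)
    return [
        float(len(message)),                     # overall length
        sum(len(w) for w in words) / n_words,    # average word length
        message.count("?") / n_words,            # question density
    ]

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two embeddings: smaller means more alike."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Two messages whose vectors sit close together are similar along whatever dimensions the embedding captures, which is what lets Apple compare structure without ever reading the words themselves.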
These “embeddings” are sent to devices participating in Apple’s Device Analytics programme. Those devices then silently check them against your real email samples—not to send back your data, but to inform Apple which of its synthetic datasets hit the mark.
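Apple hasn’t disclosed the exact protocol, but conceptually each device finds the synthetic embedding closest to its local samples and reports that match through a noisy channel, so no individual answer can be pinned to a user while the aggregate counts stay accurate. A toy sketch of that idea using randomised response, a standard local differential privacy mechanism (the function names and epsilon value here are illustrative, not Apple’s):

```python
import math
import random

def nearest_synthetic(local_embedding, synthetic_embeddings):
    """Index of the synthetic embedding closest to the device's local sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(synthetic_embeddings)),
               key=lambda i: dist(local_embedding, synthetic_embeddings[i]))

def noisy_report(true_index: int, n_options: int, epsilon: float = 1.0) -> int:
    """Randomised response: usually report the true match, sometimes lie.

    With probability p the device tells the truth; otherwise it picks
    uniformly among the other indices. Over many devices the server can
    de-bias the tallies to see which synthetic messages matched most
    often, while any single report remains deniable.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + n_options - 1)
    if random.random() < p:
        return true_index
    other = random.randrange(n_options - 1)   # uniform over the wrong answers
    return other if other < true_index else other + 1
```

A device would effectively send back a single integer, something like `noisy_report(nearest_synthetic(sample, pool), len(pool))`, rather than any text from the user’s mailbox.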
I get why they’re doing it. Apple’s current AI features have fallen behind the likes of OpenAI and Google. Its models are struggling because they’re too reliant on data that doesn’t reflect how people actually communicate. Real-world input is messy, nuanced, and unpredictable. The synthetic stuff? Too polished, too generic.
Apple is hoping this new feedback loop will breathe life into tools like Genmoji, Image Playground, Visual Intelligence, and Writing Tools. And while they claim to respect your privacy—“Synthetic data are created to mimic the format and important properties of user data, but do not contain any actual user-generated content”—the success of this hinges entirely on whether enough users say yes to the background data checks.
There’s more. In the same blog post, Apple explained that it’s using this method to refine summaries, writing suggestions, and even memory-based image creation. It all ties into a bigger initiative the company has branded as “Apple Intelligence”, which—let’s be honest—hasn’t exactly inspired confidence yet.
Interestingly, Apple’s recent management shuffle has its own story. In March, the company stripped Siri from long-time executive John Giannandrea. Control was handed instead to Mike Rockwell, the man behind the Vision Pro, and software boss Craig Federighi. They want results, not promises.
But the timeline is still murky. While the changes are expected to appear in beta versions of iOS 18.5 and macOS 15.5, the more significant AI updates—especially for Siri—won’t land until next year.
It’s a wait-and-see game now. Apple’s new approach is smarter, more private, and frankly overdue. But if people don’t opt in, or if the synthetic datasets still fall flat, then Apple may find itself further behind in a race it once led.