I thought from the Apple keynote that Siri is getting a big update to be based on Apple Intelligence, not that this context stuff was getting hacked into the existing Siri model. They talked about new voice transcription features, the ability to correct yourself while talking, deep knowledge of your personal context, etc.
It sounds like a bigger update, where they’re applying gen AI models more broadly across tons of things (including things like photo categorization), but I guess we’ll see.