This is Gruber at his best: critical of the shortcomings of the first generation, but also looking beyond them to grasp the power of the concept, as well as the first generation's highlights.
I can hardly wait until it comes to Europe so I can schedule a test session. I see several great use cases - beyond entertainment, I do quite a bit of photography, so it could be the ultimate image viewer. I think spatial computing might be the most groundbreaking part, and also the one that takes longest before you're really productive with it. In any case, I want to try to hold out for the second or third generation to get its full potential, as I assume it will stay expensive for quite a while. So I'm thinking of the equivalent of the iPhone 3GS or rather the 4S, by which point the platform had largely stabilized and the biggest change still to come was the larger screens of the iPhone 6 and later.
> The fundamental interaction model in VisionOS feels like it will be copied by all future VR/AR headsets
Apple loves to claim that they invented everything; the last instance I recall was the mute button on browser tabs. This is another case of that. This can be done with almost all headsets today.
You couldn't do it with Apple products, so I guess there's that.
This is a paid PR piece. It's not a review. Claiming that issues on a $3000 device "are fine because it's the first iteration" is not reasonable considering that multiple competitors have those issues largely solved for far less.
It's Gruber. He's got strong ties to Apple insiders (he has had many exclusives in the past) and he's a big fan. Personally, I also feel that shines through in his blog, and not just this time. I wouldn't call it paid PR, but I do sense some loyalty, so I take it with a grain of salt :) Besides the usual "Apple is amazing" subtext, I find this article more balanced than he usually is, though.
And indeed, Meta has already used a similar finger-pinching interaction for the last two years - just without the eye tracking, because the Quest lacks the hardware. As most reviewers have pointed out, though, it's not very comfortable having to look at whatever you want to interact with. That matches my experience with other eye-tracking headsets.
With my Meta Quests I prefer the controllers anyway, which Apple doesn't offer. But even when it does come to Europe, it's so ridiculously far out of my price range that it's not worth considering (and I'm a VR enthusiast). But of course incomes vary.
PS: of the reviews, I found The Verge's the best; in my opinion they managed to break through the reality distortion field most effectively.
But a killer app is missing: something you couldn't do before that's so incredibly valuable it offsets the huge price tag. In my opinion that's even more of a problem than the physical limitations of today's tech.
> This can be done with almost all headsets today.
I’ve owned a Rift, Rift S, Quest 2, Quest 3, and a Pimax Crystal QLED. None of them supported this interaction style.
Perhaps you misunderstood the description? This seems to allow you to look at an element (without moving your head) and pinch to select it (without moving your arms off your lap or desk).
You can do the hand stuff if you attach a Leap Motion to the front of the HMD - I was doing that during the DK2 era. Leap Motion had additional gestures beyond gaze (gaze interaction isn't a first here, though admittedly gaze-and-pinch is). For example, you could attach a controller to the back of your hand - almost like a watch - and then interact with that controller, tactile feedback and all. There's also the natural gesture of holding something, but that was rather pedestrian compared to the other UX ideas that Leap were exploring.
I think what's "wowing" people is the sensation of having your hands appear in VR. From personal experience, it does take presence (to use the formal VR term) to a whole new level.
In addition, adding 2D windows to 3D space isn't really "spatial computing." The pinnacle of VR/AR has always been a 3D interface, in my opinion. I have no idea how that would look/work (I do believe Leap were on their way to a solution), but it's certainly not slapping existing concepts into 3D space in the laziest - most demoable - way possible. Tilt Brush would be further along the spectrum in VR UX, but they only solved for drawing.
This isn't a leap forward. It's yet another idea that fails to hit the mark. While Apple isn't alone in that regard, nobody else is asking $3000 for 2D windows in 3D space.
I have a Leap Motion that I bought shortly after release, around a decade ago. I've used it quite a bit, but you still have to place it under your hands - they can't rest on the desktop, which means fatigue over time. That said, it's incredibly useful for manipulating 3D objects, which was my use case back then.
> I think what's "wowing" people is the sensation of having your hands appear in VR.
Oh, for sure. A lot of the reviews that I'm seeing are from people who obviously haven't been following the market over the past few years.
> In addition, adding 2D windows to 3D space isn't really "spatial computing."
I agree. At this point I see "spatial computing" as a promise, not as something this specific device will deliver.
For that matter, it looks like the Mac screen representation is not nearly as good as it could be. I've been speaking to the developers of Immersed (which I've used a ton on Quest 2), and they're working hard right now to release on the AVP. That will include multiple virtual monitors with configurable resolution and aspect ratio. I expect that will be a turning point for me productivity-wise.
> This isn't a leap forward. It's yet another idea that fails to hit the mark. While Apple isn't alone in that regard, nobody else is asking $3000 for 2D windows in 3D space.
I don't think the AVP is intended to be a mass-market device. Having not yet laid hands on one - I expect to have mine by the end of next week - it appears to be at the very bleeding edge of what's technically possible hardware-wise, and a lot of thought has gone into building and polishing visionOS. That alone isn't enough to justify the cost.
I see three markets for it today:
* early-adopters/enthusiasts, who don't really need to justify the purchase beyond wanting it. These will provide some revenue but the devices will mostly collect dust.
* "influencers", who will (attempt to) recoup the cost through videos posted on social media. These people will effectively be marketing the product for Apple, and otherwise not contribute to the ecosystem
* developers. Apple needs applications for this platform, which they're obviously throwing a huge amount of money at. I see myself in this group. I plan to use my AVP for my day-to-day work, but I'm also planning to build at least a few small apps to explore the APIs with the intention of publishing at least one highly-polished app in the near future. If I can get anywhere near the top of a category in the visionOS App Store by the time Apple releases a lighter version of this at a price point of ~$2k, I have zero doubt I'll be able to more than make back the cost of the device itself.
> I have a LEAP Motion, that I bought shortly after release around a decade ago. I've used it quite a bit, but you still have to place it under your hands - they can't rest on the desktop, which means fatigue over time.
This is substantially different from attaching it to the HMD. Either way, my original point was Apple claiming to have invented things that they did not invent and the notion that this device presents any leap in terms of UX.
Its value is that it's the only HMD Apple integrates with, and if that covers your use cases, that's fine. I'm happy to see more people exposed to VR in the hope that it becomes mainstream. Apple claiming ideas as their own is plagiarism, which is not fine.
> Windows remain anchored in place in the world around you. Set up a few windows, then stand up and walk away, and when you come back, those windows are exactly where you left them.
I wonder what the limits of that are. Presumably this works across sessions (meaning when you take off the AVP in between). Are the windows still there when you spend a weekend elsewhere? After a longer vacation? Are the windows still there in your vacation home when you return to it next year?
GPS presumably isn’t precise enough to “remember” specific walls in a house (if the AVP even has GPS). So how does this work?
He notes that holding the crown button down brings all your windows from wherever they were to where you are now. So very long-term window positioning seems unlikely, if only because you're probably going to use that feature with some regularity. But it would be interesting to know whether, if you didn't push the button for a couple of days, an "office" and "home" setup could persist.
For Hololens, the answer to all those questions is ‘yes’. The windows last forever — unless the room/space changes so much that the spatial mapping can no longer recognize it.
Microsoft calls this permanent layout feature ‘spatial anchors’, Google calls it ‘Geospatial anchors’, Apple calls it ‘Content Anchors’ aka ARAnchor.
Spatial mapping uses many sensors; it's way beyond GPS. These APIs have existed since the beginning of WMR/ARCore/ARKit, around 2016-2017.
And their locations will be remembered (by ID) if an app wants to offer persistent experiences at multiple locations. Not so sure about windows on visionOS.
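For a concrete idea of how this persistence looks on Apple's side: on iOS, ARKit exposes it through `ARWorldMap`, which serializes the spatial map together with its `ARAnchor`s so a later session can relocalize against it. The method names below are from the public iOS ARKit API (visionOS may differ); the `mapURL` location is just an illustrative choice:

```swift
import ARKit

// Illustrative file location for the archived map.
let mapURL = FileManager.default.urls(for: .documentDirectory,
                                      in: .userDomainMask)[0]
    .appendingPathComponent("office.worldmap")

// Save the current world map - which carries all ARAnchors - to disk.
func saveWorldMap(session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(
                  withRootObject: map, requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// Later (even after relaunch), run tracking against the saved map;
// anchors reappear once ARKit recognizes the mapped space again.
func restoreWorldMap(session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(
              ofClass: ARWorldMap.self, from: data)
    else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```

Note that relocalization only succeeds if the physical space still resembles the map - the same caveat the Hololens has.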
Any reports on fatigue, neck strain, etc.? I can't stand more than an hour or so in VR before I have to get out. It's wearisome for long periods. I'd hate to have to work in one.
“I’ve used it for hours at a time without any discomfort, but fatigue does set in, from the weight alone. You never forget that you’re wearing it.”
“In terms of resolution, Vision Pro is astonishing. I do not see pixels, ever. I see text as crisply as I do in real life. It’s very comfortable to read.”
I’ve been using AirPods Max for years now, which many people complain are “heavy”, but I don’t mind them. I expect the weight will be fine, but it is obviously there, and like exercising any new muscle, it might take some time to adjust.
I'm glad he talked a bit about what this thing is like with a trackpad / cursor, because for productivity that seems like it's the only real way to use this device.
I think they missed an opportunity by not making the surface of the battery a trackpad. I imagine that would be an added convenience for some use cases.
Your comment reminded me of the way Apple didn't put cursor keys on the keyboard for the first Macintosh model because they wanted to force users to use the mouse.
Oh god. I get that it's a move to market a feature, but it's so disrespectful of users' expectations. I wonder if this kind of trade-off still happens there or if it was just a byproduct of its time.
It wasn’t to force users to use the mouse. It was to force developers to create software that used the new paradigm rather than just porting over CLI apps.
Apple still uses this tactic a lot, and it’s one of the things that irritates people most about them. I’m glad they do it, but it’s understandable why it pisses people off.
> I think they missed an opportunity by not making the surface of the battery a trackpad.
Apple for the most part doesn't merge two unrelated functions in a peripheral or component.
For example, they never did the "2-in-1" thing with the Mac that Windows vendors did: a tablet with a touch interface that was also a laptop with a keyboard and mouse. The result was a device that wasn't great at being either a laptop or a tablet. I have a Surface 2-in-1 and it's not a particularly good experience.
The early adopters of the Apple Vision Pro will already have mice, trackpads, etc. on their Macs and iPads that will work just fine.
I'm usually pretty quick to criticize Apple (and Gruber's unapologetic fanboy tendencies) when warranted, but this is a bit much. You basically just reproduced the things Gruber acknowledges as negatives, and ignored all the positives. Not really a useful -- or honest -- take.
You can't gaze and finger-to-thumb click something in the virtual Mac display?
Edit - I guess you'll be in front of your computer anyway, and the entire Mac UI was designed for a keyboard and mouse. Probably defaulting to using only keyboard and mouse when using the virtual display.
Apparently not. Maybe that's why they don't yet allow splitting Mac windows out into your space: it would be too confusing to remember where it's enabled and where it's not.
I believe it requires a keyboard and mouse to interact with it. In practice for most things the tap targets would be quite small and without being able to “snap” to UI elements you probably wouldn’t want to do this anyway.