Apple has provided users with two options for capturing 3D “spatial” video, Apple Vision Pro and iPhone 15 Pro, but both options have trade-offs. Here’s what you need to know.
There are multiple differences between the camera capabilities of the Apple Vision Pro and iPhone 15 Pro. Specs aside, the size and form factor of the devices change what can be captured and their practicality.
The most significant disadvantage Apple Vision Pro users will face is social acceptance. It isn’t going to be easy to convince your loved ones to act natural while you’re walking around dressed like Daft Punk, especially somewhere like a wedding.
Shooting video on an iPhone isn’t just accepted — it’s the norm. People may not like being on video, but they’ll react naturally rather than have the entire video just be people asking why you’re wearing that thing on your face.
Beyond social issues, several differences exist between what can be captured on Apple Vision Pro and iPhone 15 Pro. Understanding the pros and cons of each device will help determine which to use in a given situation.
Capturing spatial video with Apple Vision Pro
Apple Vision Pro is a computer you wear on your face. It must be worn to capture video or photos, so there’s no setting it up on a tripod and walking away.
The spatial video captured is in a square 1:1 format at 2200 pixels by 2200 pixels. It is a near-perfect recreation of the passthrough viewed by the user.
The Apple Photos app shows Apple Vision Pro’s Main Camera has an 18 mm focal length and f/2.0 aperture. Compare that to iPhone’s 24 mm f/1.78 Main Camera.
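Those focal lengths translate directly into how much of a scene each camera sees. As a rough sketch, assuming the metadata values are 35mm-equivalent focal lengths, the standard pinhole field-of-view formula shows how much wider the Apple Vision Pro camera is:

```python
import math

def horizontal_fov(focal_length_mm: float, frame_width_mm: float = 36.0) -> float:
    """Horizontal field of view in degrees for a 35mm-equivalent focal length,
    using the pinhole formula: FOV = 2 * atan(frame_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_length_mm)))

# Apple Vision Pro's 18mm camera vs the iPhone's 24mm Main camera
print(round(horizontal_fov(18), 1))  # 90.0 degrees -- noticeably wider
print(round(horizontal_fov(24), 1))  # 73.7 degrees
```

The wider 18mm view fits more of the scene into the frame, which suits an eye-level, first-person recording.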
A minute of video is about 300MB. For comparison, a 4K 30 fps video capture from an iPhone lands at about 190MB after a minute.
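Converting those per-minute file sizes into average bitrates makes the gap clearer. This is just back-of-the-envelope arithmetic on the figures above:

```python
def mbps(megabytes_per_minute: float) -> float:
    """Convert a file-size rate in MB/min to an average bitrate in Mbit/s."""
    return megabytes_per_minute * 8 / 60

print(round(mbps(300), 1))  # Apple Vision Pro spatial video: 40.0 Mbit/s
print(round(mbps(190), 1))  # iPhone 4K 30fps video: about 25.3 Mbit/s
```

Even at a square 2200-pixel frame, the stereo capture eats storage considerably faster than single-lens 4K.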
Enlarging a spatial video captured on Apple Vision Pro into an immersive view makes it feel like you’re back where the video was captured. The depth information makes everything feel in its place, and since it was captured at eye level, it’s as if you’ve fallen first-person into a memory.
The depth is well recreated and feels natural. Apple’s example of a person blowing out candles or playing with bubbles exemplified the depth detail.
However, plenty of pixels and depth information alone don't make for good video. The cameras don't come close to the iPhone's performance, so videos will be dimmer, have more noise, and lack dynamic range.
To get the best results, capture video in bright spaces and attempt to steady yourself by sitting or bracing yourself. Movement makes everything blur more than you’re used to from digital cameras.
Since the camera is attached to your head, you can only film what can be seen by looking. That sounds obvious, but you don’t realize how maneuverable a phone camera is until you’re trying to capture something by moving your neck.
If Apple allowed video capture without wearing the headset, setting it up somewhere to passively capture video would be excellent. That's the problem: there is little to no chance you'll wear Apple Vision Pro around just to capture video. It is simply too impractical.
Until more options exist for passive capture or a lighter headset comes along, spatial photo and video capture from Apple Vision Pro will be nothing more than a party trick. It’s interesting to tinker with, but it won’t be a viable way to capture the format in 99% of cases.
That’s where iPhone comes in.
Capturing spatial video on iPhone 15 Pro
Spatial video capture isn't enabled by default on iPhone. Users will need to turn it on in Settings > Camera > Formats.
The iPhone 15 Pro and iPhone 15 Pro Max are the only iPhones capable of capturing spatial video. It is captured at 1080p 30 fps and results in a 16:9 format file of about 130MB per minute.
The spatial video captured from an iPhone is a more primitive form of 3D with less depth data. It records two video streams from cameras physically offset from one another.
The resulting video is reminiscent of Apple's early attempts at Portrait mode. The depth is there, and the image is clear, but the recording falls apart if the camera is moved too quickly.
Because of the limited physical separation of the two cameras, the resulting 3D video doesn’t pop out as much as what is recorded on Apple Vision Pro. The trick to getting more stunning depth effects is to be close to the subject with a distant background.
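The relationship between camera separation and perceived depth can be sketched with the standard stereo disparity formula, d = f * B / Z. The baseline and focal values below are illustrative assumptions, not Apple's specifications:

```python
def disparity_px(baseline_m: float, depth_m: float, focal_px: float = 1500.0) -> float:
    """Stereo disparity in pixels under a pinhole model: d = f * B / Z.
    focal_px is an assumed focal length in pixels, for illustration only."""
    return focal_px * baseline_m / depth_m

# Illustrative baselines: ~2cm between iPhone lenses vs ~6.4cm between human eyes
for baseline in (0.02, 0.064):
    near = disparity_px(baseline, 1.0)   # subject 1m away
    far = disparity_px(baseline, 10.0)   # background 10m away
    # The difference in disparity between subject and background drives the "pop"
    print(baseline, round(near - far, 1))
```

A wider baseline multiplies every disparity, which is why the Vision Pro's eye-width spacing pops more, and why a nearby subject against a distant background maximizes the effect on iPhone.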
Spatial video on the iPhone is captured at a lower resolution than Apple Vision Pro, but the higher-quality camera more than makes up for it. The difference between the two is night and day in terms of noise, color, and brightness.
Handling an iPhone is more natural and easier than moving your head to frame a video. Ensuring the Apple Vision Pro video is level is also very difficult.
The focus distances of the two camera systems are also very different. The iPhone can get really close to the subject, making small objects appear bigger. Apple Vision Pro puts a lot of distance between the viewer and the object — get too close, and it gets blurry.
Capturing an iMac toy under a desk lamp yielded wildly different results. The Apple Vision Pro video was dim and grainy, while the iPhone video was truer to life but with less depth.
It's impossible to convey the effect in text on a 2D display, but both videos have plenty of depth overall. The difference is that Apple Vision Pro footage feels more like what you'd see with your own eyes, while the iPhone captures the kind of 3D you'd see in a theater.
When to use iPhone versus Apple Vision Pro
Apple Vision Pro can capture stunning spatial video that makes it feel like you're viewing the scene as if you were there again. It has a look and feel the iPhone just can't match.
That said, it may be some time before it's socially acceptable to wear Apple Vision Pro around an event without drawing undue attention. But there is one place we feel Apple Vision Pro fits in: touring spaces.
No, I don’t mean a literal museum tour, but a tour of your spaces. Put on Apple Vision Pro and just walk around your home while talking or interacting with your family.
Walk through your yard in spring when the flowers bloom, or visit your favorite public park or walking path, provided it is safe. Wearing a $3,500 face computer in public could attract unwanted attention.
As Apple Vision Pro becomes more socially acceptable, there might be more opportunities to use the device to capture memories. You’ll just have to gauge the room.
Otherwise, capturing spatial video with an iPhone fits in the same spaces regular video does today. However, moving too fast, like chasing kids, will result in broken-looking footage.
The start of spatial video
Not to discourage anyone, but this is the worst spatial video will ever be. Realize that video captured from Apple Vision Pro or iPhone 15 Pro will age poorly and look dated within a few years.
We advise taking photos and 4K video like you normally would. Don’t miss a critical moment of your life trying to capture it in 3D.
However, do use the spatial video feature. As long as you ensure you’ve got a good mix of regular 4K and spatial video, you’re less likely to regret capturing video in 3D.
Apple may be able to improve aspects of video capture with Apple Vision Pro via software before version two arrives in 2026 or 2027. Otherwise, we’re stuck with that hardware and the video it creates.
The iPhone will undoubtedly get better at spatial video sooner rather than later. Apple's push into AI will help with processing video and likely increase the capturable resolution, but new cameras with more physical separation will help, too — all possible with iPhone 16 Pro.
Apple is rumored to be bringing basic spatial video capture to iPhone 16 and iPhone 16 Plus. So, if you're not a Pro iPhone buyer, you'll at least have a chance to start capturing your life in spatial video later in 2024.
This story originally appeared on AppleInsider