On Monday, Apple took the tech world by storm with the long-awaited introduction of its mixed-reality headset, the Vision Pro. Priced at $3,499 and set to launch in early 2024, the cutting-edge device is aimed primarily at developers and content creators. With the Vision Pro, Apple aims to redefine the industry and usher in a new spatial computing era. In this article, we simplify the science behind Apple’s Vision Pro headset and explore its innovative features.
Understanding Apple’s Vision Pro
At its core, the Vision Pro seamlessly blends the digital world with the real world, overlaying digital content onto your physical surroundings. Resembling a pair of ski goggles, this headset leverages a multitude of complex technologies to deliver a user interface and experience that seems deceptively simple.
According to Mike Rockwell, Apple’s vice president of the Technology Development Group, creating the Vision Pro required groundbreaking innovation across every aspect of the system. Through a remarkable integration of hardware and software, Apple has designed a compact wearable device that stands as the most advanced personal electronics device ever created.
How Does the Vision Pro Work?
To understand how the Vision Pro works, it helps to start with its purpose. The mixed-reality headset employs a built-in display and lens system to bring Apple’s new visionOS operating system to life in three dimensions. Users can interact with this OS using their eyes, hands, and voice, allowing them to engage with digital content as if it were truly present in the real world.
While promotional videos may suggest a transparent pane of glass with digital overlays, like the now-defunct Google Glass, the Vision Pro’s displays are fully opaque. An external screen (a feature Apple calls EyeSight) shows a live rendering of the user’s eyes, making them visible from the outside.
Packing 12 cameras, five sensors, and six microphones (23 in all), the Vision Pro also incorporates the new R1 chip, two internal displays (one per eye), and an intricate lens system. Together, these components create the illusion of looking directly at the real world; what the user actually sees is a live camera feed of their surroundings with digital content composited on top.
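The compositing idea behind passthrough can be sketched as simple alpha blending: each displayed pixel is a weighted mix of the live camera feed and the digital overlay. This is a minimal illustration in plain Python, not Apple’s actual rendering pipeline:

```python
def composite(camera_px, overlay_px, alpha):
    """Blend one RGB digital-overlay pixel onto a camera-feed pixel.

    alpha = 0.0 shows pure passthrough video;
    alpha = 1.0 shows pure digital content.
    """
    return tuple(
        round(alpha * o + (1 - alpha) * c)
        for c, o in zip(camera_px, overlay_px)
    )

# A mid-grey camera pixel with a red UI element on top:
print(composite((128, 128, 128), (255, 0, 0), 1.0))  # opaque digital content
print(composite((128, 128, 128), (255, 0, 0), 0.0))  # raw passthrough
print(composite((128, 128, 128), (255, 0, 0), 0.5))  # translucent overlay
```

Real headsets do this per pixel, per eye, every frame, with corrections for lens distortion and latency, but the core blend is this simple.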
Apple’s R1 chip plays a crucial role in minimizing lag, and with it motion sickness, by processing the headset’s sensor data at high speed. The device also includes Apple’s M2 chip, the same processor found in its Macs, to handle the general-purpose computation that powers the applications used with the headset.
Inside the headset, infrared cameras track the user’s eye movements, letting the device update the internal displays in real time based on where the user is looking. Downward-facing exterior cameras track hand movements, enabling users to interact with visionOS through gestures. A LiDAR scanner on the front provides real-time depth mapping of the objects surrounding the Vision Pro.
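The eye-and-hand input model described above, look at an element to target it, then pinch to select it, can be sketched as a tiny state machine. Every name here is hypothetical, for illustration only; it is not the visionOS API:

```python
class GazePinchInput:
    """Toy model of gaze-targeted, pinch-confirmed selection.

    Illustrative only; visionOS exposes spatial input through
    its own frameworks, not an interface like this one.
    """

    def __init__(self):
        self.gaze_target = None  # element the eyes currently rest on
        self.selected = []       # elements confirmed by a pinch

    def on_gaze(self, element):
        # Eye tracking continuously updates the hovered element.
        self.gaze_target = element

    def on_pinch(self):
        # A pinch gesture confirms whatever the user is looking at.
        if self.gaze_target is not None:
            self.selected.append(self.gaze_target)

ui = GazePinchInput()
ui.on_gaze("Photos icon")
ui.on_pinch()              # selects the gazed-at element
ui.on_gaze("Safari icon")  # gaze moves on; nothing selected yet
print(ui.selected)         # ['Photos icon']
```

The key design point is the division of labor: the eyes do the fast, precise pointing, while the hands supply only a low-effort confirmation gesture.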
The Science Behind the Vision Pro
Although we perceive the world in three dimensions, each eye captures only a two-dimensional image. The depth we experience is the result of the brain fusing the slightly different images from the two eyes, a process known as stereopsis, into a single view with a sense of depth.
The Vision Pro capitalizes on this phenomenon by presenting a slightly different image to each eye on its two internal displays. The horizontal offset between the images tricks the brain into perceiving a three-dimensional scene, giving users a convincing 3D visual experience.
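This depth trick can be quantified with the standard stereo relation from a simple pinhole-camera model: the horizontal disparity between the two eyes’ images is focal length times eye separation divided by depth, so nearer objects shift more between the eyes, which the brain reads as closeness. A sketch with assumed, illustrative numbers (a ~63 mm interpupillary distance and an arbitrary focal length in pixels):

```python
def disparity_px(depth_m, ipd_m=0.063, focal_px=1000):
    """Horizontal offset (in pixels) between the left- and right-eye
    images of a point at the given depth, under a pinhole-camera model.

    ipd_m: interpupillary distance (~63 mm is a common average);
    focal_px: per-eye focal length in pixels, chosen for illustration.
    """
    return focal_px * ipd_m / depth_m

for d in (0.5, 1.0, 2.0, 10.0):
    print(f"object at {d:4.1f} m -> {disparity_px(d):6.1f} px of disparity")
```

Note how disparity falls off with distance: beyond a few meters the two eyes see nearly the same image, which is why stereo depth cues matter most for nearby objects, exactly the range where headset UI elements live.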
With the introduction of the Vision Pro, Apple is pushing the boundaries of mixed reality and spatial computing. By seamlessly integrating the digital and real worlds, this groundbreaking headset opens up new possibilities for developers and content creators. With its advanced technologies, including the R1 chip and innovative sensor systems, the Vision Pro sets a new standard for mixed reality experiences. As Apple continues to innovate, it will be exciting to witness how Vision Pro shapes the future of technology and enhances our interactions with the digital realm.