Yesterday, at the much-anticipated Apple iPhone 15 launch event, something remarkable was revealed, and it isn’t the switch to USB-C or new hardware like the Apple Watch Series 9 and Ultra 2.
It’s a groundbreaking feature known as “Double Tap.” Imagine gaining control of your Apple Watch screen without touching it, with nothing more than a pinch of your fingers in thin air.
Double Tap will allow users to tap their index finger and thumb together twice to answer or end phone calls, play or pause music, or snooze alarms. The gesture can also scroll through widgets, much like turning the Digital Crown.
It’s not just a fun gimmick; it’s the future, and here’s why it’s all about Apple Vision Pro.
For years, the world of virtual reality has grappled with a significant challenge: how to control it effectively. While other industry giants relied on handheld controllers with traditional buttons, Apple’s Vision Pro headset took a revolutionary path earlier this year by using external cameras to track the motion of your hands directly.
Now, with the introduction of Double Tap on the Apple Watch Series 9 and Apple Watch Ultra 2, Apple is pushing the boundaries of hand movement as input. While the exact mechanics are unclear, I assume the watch’s accelerometer is closely involved in interpreting the subtle movements of your wrist and fingers.
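To make that idea concrete, here is a minimal sketch of how a wrist-worn pinch gesture could, in principle, be detected from raw accelerometer data using Apple’s CoreMotion framework. This is purely illustrative and not Apple’s actual Double Tap implementation; the class name, threshold, and timing window are invented for the example.

```swift
import Foundation
import CoreMotion

// Illustrative sketch only: a naive double-pinch detector driven by the
// accelerometer through CoreMotion. Apple has not published how Double Tap
// works internally; the class name, threshold, and timing window here are
// invented for the example.
final class PinchSketchDetector {
    private let motion = CMMotionManager()
    private var lastSpike: Date?

    /// Called when two acceleration spikes land within the timing window.
    var onDoublePinch: (() -> Void)?

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 100.0 // sample at ~100 Hz
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.userAcceleration else { return }
            // Magnitude of user acceleration (gravity already removed).
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            // A finger pinch shows up as a brief jolt at the wrist.
            guard magnitude > 0.35 else { return } // hypothetical threshold
            let now = Date()
            if let previous = self.lastSpike, now.timeIntervalSince(previous) < 0.5 {
                self.onDoublePinch?() // two jolts within 0.5 s: treat as a double tap
                self.lastSpike = nil
            } else {
                self.lastSpike = now
            }
        }
    }

    func stop() {
        motion.stopDeviceMotionUpdates()
    }
}
```

In reality, Apple reportedly fuses accelerometer, gyroscope, and optical heart sensor data with on-device machine learning, which is why the shipped feature is far more robust than a simple threshold like this.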
Will Double Tap extend to the iPhone and MacBook, allowing Apple Watch users to seamlessly integrate their wrist-computer with all other Apple devices?
I wouldn’t put it past them.
Love it or hate it, Apple’s long-term vision for spatial computing is nothing short of transformative; it’s about the internet materializing in our physical space, allowing us to look through an Apple product rather than just at it.
This marks a new era in human-computer interaction and in how we perceive the digital world: spatial computing is not a mere technological upgrade but a paradigm shift, one that challenges us to rethink how we interact with technology and perceive our digital surroundings.
We are on the cusp of a new era where the digital and physical seamlessly converge, opening up endless possibilities for creativity and human connection.
— Damir First, Head of Communications
Auki is building the posemesh, a decentralized machine perception network for the next 100 billion people, devices and AI on Earth and beyond. The posemesh is an external and collaborative sense of space that machines and AI can use to understand the physical world.
Our mission is to improve civilization’s intercognitive capacity; our ability to think, experience and solve problems together with each other and AI. The greatest way to extend human reach is to collaborate with others. We are building consciousness-expanding technology to reduce the friction of communication and bridge minds.
X | LinkedIn | Medium | YouTube | AukiLabs.com
The posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.
The posemesh is designed for a future where spatial computing is both collaborative and privacy-preserving. It limits any organization's surveillance capabilities and encourages sovereign ownership of private maps of personal and public spaces.
Decentralization also offers a competitive advantage, especially in shared spatial computing sessions such as AR, where low latency is crucial. The posemesh is the next step in the decentralization movement, an antidote to the growing power of big tech.
The Posemesh Foundation has tasked Auki Labs with developing the software infrastructure of the posemesh.
X | Discord | LinkedIn | Medium | Updates | YouTube | Telegram | Whitepaper | DePIN Network