At Auki we often describe our mission as building a decentralized machine perception network for the next 100 billion people, devices and AI on Earth and beyond. Robots and embodied AI will make up a significant share of that 100 billion. The development of robotics goes hand in hand with the development of the posemesh (AKA the Auki network), and robotics is key to the practical and commercial applications of the Auki network in particular.
This is why we have been developing our own robotics capabilities. Robots built from the ground up to integrate with the Auki network become part of a unified, intelligent system with collaborative and dynamic spatial awareness.
After months of toiling away in secret, we are ready to lift the lid on our team’s impressive progress and share what they have been working on.
Phil Shaw, our resident roboticist, shares his insights into the exciting developments.
For robots and AI to fully integrate with the real world, it is not enough for them to see; they need to know what they saw and where they saw it to truly understand it.
This first video offers a glimpse into our journey into robotics development so far.
We started small, with a simple Bluetooth-controlled robot navigating under the command of a smartphone. Next, we took to the skies with a drone, exploring new dimensions of mapping, control and automation.
Our focus then shifted to mastering ROS (Robot Operating System), an open-source robotics middleware suite. We used an educational robot to develop foundational robotic skills and techniques.
With this knowledge, we advanced to a ROS-based robot equipped with a depth camera, utilising SLAM for precision navigation and environmental awareness.
This development culminated in a commercial reception robot, designed to assist staff with Cactus retail tasks, streamlining operations and enhancing efficiency.
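To give a sense of what this stage involved, here is a minimal sketch of the kind of ROS 2 node that sits behind a depth camera on such a robot. The topic name and message type are common defaults used for illustration, not the specifics of our build.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class DepthListener(Node):
    """Subscribes to a depth camera stream, the raw input to SLAM and obstacle avoidance."""

    def __init__(self):
        super().__init__('depth_listener')
        # Many depth camera drivers publish on a topic like this; adjust to your hardware.
        self.subscription = self.create_subscription(
            Image, '/camera/depth/image_raw', self.on_depth, 10)

    def on_depth(self, msg: Image) -> None:
        # Here we only log the frame's shape; a real pipeline would hand the
        # frame to SLAM or obstacle detection.
        self.get_logger().info(
            f'Depth frame received: {msg.width}x{msg.height} ({msg.encoding})')


def main():
    rclpy.init()
    node = DepthListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```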
However, this was just the beginning. Our next update took the vision aspect of the perception network to the next level.
Our second video with Phil lifts the curtain on what we're currently working on: giving Cactus AI eyes through a mobile vision platform.
The mobile vision platform is an autonomous multi-sensor robot that uses the spatially mapped domain to capture high-resolution, high-stability images with pose for task creation and analytics. "With pose" is the critical part: where the image was captured is as important as the image itself.
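As an illustration of what "an image with pose" means in practice, here is a minimal sketch of such a record, assuming a simple position-plus-quaternion pose. The field names are illustrative, not the actual posemesh schema.

```python
from dataclasses import dataclass
import time


@dataclass
class Pose:
    x: float        # position in the domain's coordinate system (metres)
    y: float
    z: float
    qx: float       # orientation as a quaternion
    qy: float
    qz: float
    qw: float


@dataclass
class PosedImage:
    image_path: str     # path to the captured frame on disk
    pose: Pose          # where the camera was when the frame was captured
    camera_id: int      # which of the mounted cameras took it
    timestamp: float    # capture time (seconds since epoch)


# Example: a frame from camera 2, tagged with the pose reported at capture time.
capture = PosedImage(
    image_path='frames/cam2_000123.jpg',
    pose=Pose(x=3.1, y=0.4, z=1.2, qx=0.0, qy=0.0, qz=0.707, qw=0.707),
    camera_id=2,
    timestamp=time.time(),
)
```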
First, we needed a base. We chose Slamtec, a leader in LiDAR technology and robotics, and their Athena 2 robot as our platform.
Why? The Athena 2 offers the best balance of customization, stability and performance. For our initial experiments, we added five 4K cameras, mounted vertically. Previous tests with devices like the wearable had taught us that 4K is the only option for the level of detail required.
To process the camera feeds, we also added a Raspberry Pi 5 connected to the Athena 2 via Ethernet. This is our application brain. Separating the compute gives us the freedom to choose the operating system and modules, plus the ability to upgrade the compute in the future should we need more CPU or GPU power.
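As a rough sketch of the application-brain side, this is the kind of loop the Pi could run to grab a frame from each camera. The device indices, the use of OpenCV and the 4K capture settings are assumptions for illustration, not our production pipeline.

```python
import cv2

# Assumed: five cameras exposed as sequential video devices.
CAMERA_INDICES = [0, 1, 2, 3, 4]


def grab_frames() -> dict:
    """Capture one frame per camera; returns {camera_index: frame}."""
    frames = {}
    for idx in CAMERA_INDICES:
        cap = cv2.VideoCapture(idx)
        # Request a 4K capture; the driver may fall back to a lower mode.
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2160)
        ok, frame = cap.read()
        if ok:
            frames[idx] = frame
        cap.release()
    return frames


if __name__ == '__main__':
    for idx, frame in grab_frames().items():
        cv2.imwrite(f'cam{idx}.jpg', frame)
```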
To get the most out of the hardware, we needed to get more acquainted with the control systems, API and onboard mapping and SLAM systems of the Athena 2.
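Part of that work is simply being able to ask the base for its current pose over the Ethernet link so the Pi can tag captures with it. The sketch below assumes a hypothetical HTTP endpoint, address and response shape; the real Athena 2 integration goes through the vendor's own SDK and API.

```python
import requests

# Hypothetical address of the robot base on the internal Ethernet link.
ROBOT_BASE_URL = 'http://192.168.1.50:8080'


def get_base_pose() -> dict:
    """Poll the base for its current pose (placeholder endpoint, not the real API)."""
    resp = requests.get(f'{ROBOT_BASE_URL}/api/slam/pose', timeout=2.0)
    resp.raise_for_status()
    # Expected shape (illustrative): {"x": 1.2, "y": 0.8, "yaw": 0.3}
    return resp.json()


if __name__ == '__main__':
    print(get_base_pose())
```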
These tests surfaced some interesting challenges. Our next video covers how we approached solving them.
In this third video, Phil highlights the continuous iteration and evolution that has characterized our robotics journey. Here we can see how our robotics team tackled the obstacles that came up in our last video.
Finally, we created a visualization of the mobile vision platform within the digital coordinate system known as a domain.
It's a small detail, but being able to see the robot moving dynamically within the domain highlights the collaborative potential of our technology, enabling seamless interaction between robots and their environment.
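To make "within the domain" concrete: the visualization needs the robot's pose expressed in the domain's coordinate system rather than in its own local SLAM frame. Below is a simplified 2D sketch of that conversion, assuming the two frames are related by a known rigid transform obtained from calibration; the numbers are illustrative.

```python
import numpy as np


def pose_to_matrix(x: float, y: float, theta: float) -> np.ndarray:
    """2D pose (x, y, heading) as a 3x3 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])


# Rigid transform from the robot's local SLAM frame into the domain frame
# (would come from calibration in practice; illustrative values here).
T_domain_local = pose_to_matrix(5.0, 2.0, np.pi / 2)

# Pose reported by the robot in its local frame.
T_local_robot = pose_to_matrix(1.2, 0.8, 0.3)

# Pose of the robot expressed in the domain frame, ready to visualize.
T_domain_robot = T_domain_local @ T_local_robot
x, y = T_domain_robot[0, 2], T_domain_robot[1, 2]
theta = np.arctan2(T_domain_robot[1, 0], T_domain_robot[0, 0])
print(f'Robot in domain frame: x={x:.2f}, y={y:.2f}, heading={theta:.2f} rad')
```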
Every challenge brings our robotics team closer to our vision: giving spatial awareness to robots, devices and AI. The mobile vision platform is a key component in bringing AI into the physical world, a necessity for a decentralized machine perception network and for AI to reach its full potential.
Stay tuned for more updates on our robotics journey!
Auki is building the posemesh, a decentralized machine perception network for the next 100 billion people, devices and AI on Earth and beyond. The posemesh is an external and collaborative sense of space that machines and AI can use to understand the physical world.
Our mission is to improve civilization’s intercognitive capacity; our ability to think, experience and solve problems together with each other and AI. The greatest way to extend human reach is to collaborate with others. We are building consciousness-expanding technology to reduce the friction of communication and bridge minds.
X | LinkedIn | Medium | YouTube | AukiLabs.com
The Posemesh is an open-source protocol that powers a decentralized, blockchain-based spatial computing network.
The Posemesh is designed for a future where spatial computing is both collaborative and privacy-preserving. It limits any organization's surveillance capabilities and encourages sovereign ownership of private maps of personal and public spaces.
Decentralization also offers a competitive advantage, especially in shared spatial computing sessions, AR for example, where low latency is crucial. The posemesh is the next step in the decentralization movement, an antidote to the growing power of big tech.
The Posemesh Foundation has tasked Auki Labs with developing the software infrastructure of the posemesh.
X | Discord | LinkedIn | Medium | Updates | YouTube | Telegram | Whitepaper | DePIN Network