Hi everyone!
I’m excited to share that our new World Tracking for WebAR (beta) is now ready for you to use in Zapworks Studio, as well as in our Universal AR SDKs for Unity, Three.js, A-Frame, Babylon.js, React Three.js, PlayCanvas and JavaScript.
When it comes to Augmented Reality, world tracking (content tracked to the real world) is one of the most impressive and immersive tracking types, giving users the freedom and flexibility to experience content in the world around them.
This latest update to the Zappar computer vision libraries brings an entirely new implementation for our existing Instant World Tracking API that supports a much wider range of use-cases.
Here’s a sample experience built internally using our World Tracking implementation to showcase mini-games in WebAR:
Try it for yourself
You can try our ‘Whack-A-Mole’ experience for yourself by scanning the QR code below, or by tapping on this link if you’re on your smartphone.
What’s new?
Our new approach is a from-the-ground-up reimplementation that constantly refines an estimated 3D map of world points whilst also updating the position of the camera within that map. This is a form of “Simultaneous Localization and Mapping”, or SLAM for short.
Our previous Instant World Tracking approach required a relatively dense set of features to be located on the horizontal placement surface, but this new implementation removes that restriction - the algorithm will make use of points from across the image regardless of their 3D arrangement.
We now build a broader and more general understanding of the 3D structure of the environment, which allows users to move more freely without the need to keep the original anchor point in view.
Live Webinar: New & improved World Tracking
I recently went through Zappar’s new World Tracking implementation during a live learning session. Watch the recording below for a more detailed breakdown.
What devices and browsers are supported?
At Zappar we always strive to enable your experiences to reach as many users as possible, on the devices and browsers that they are already using.
For our new World Tracking implementation, we combine data from the web APIs for camera access and motion sensor updates. These APIs have been available on both iOS and Android for many years. We support the Safari browser on iOS all the way back to iOS 11.3, and third-party iOS browsers since iOS 14.2. The vast majority of browsers on Android are also supported.
As world tracking leverages accurate motion sensor data, this tracking type requires the device to include a gyroscope sensor. These have been included in every iOS device (all iPod touches, iPhones, and iPads) for over 10 years, and are also present in the vast majority of Android phones currently in use. The Android tablet ecosystem is a little more hit-and-miss, and laptops often lack both a gyroscope and a rear-facing camera, so they tend not to be suitable devices for world tracking.
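If you'd like to pre-check whether a visitor's device exposes the motion data that world tracking relies on before launching an experience, a rough capability test might look like the sketch below. This uses only the standard web motion sensor APIs; the SDKs handle the actual permission prompts for you, and the function name here is just illustrative.

```javascript
// A minimal sketch of a motion-sensor capability check, assuming a modern browser.
// The Zappar SDKs manage permission prompts themselves; this is only a pre-flight
// test you might run before offering a world tracking experience.
async function motionSensorsAvailable() {
  if (typeof DeviceMotionEvent === "undefined") return false;

  // iOS 13+ requires an explicit permission request, and it must be triggered
  // by a user gesture such as a tap on a "Start" button.
  if (typeof DeviceMotionEvent.requestPermission === "function") {
    try {
      return (await DeviceMotionEvent.requestPermission()) === "granted";
    } catch {
      return false;
    }
  }

  // Most Android browsers (and older iOS versions) expose devicemotion
  // events without an explicit permission prompt.
  return true;
}
```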
Are there any limitations?
The web platform has some inherent limitations compared with native APIs that make delivering world tracking on the web a challenging endeavour.
In particular, data from motion sensors is provided with lower precision and sample rates than would be available natively, the relative position of the camera lens and motion sensors within a device is not exposed, and accurate timestamps are not provided with camera and motion data. All of these combine to make estimating camera motion and world map feature locations in absolute-scale units (i.e. metres) a more complex challenge. For that reason, this first milestone build uses arbitrary units for the map and camera motion and so can’t be used to deliver precise 1:1 scale visualisations.
Any camera-based world tracking solution requires a certain amount of visual detail on surfaces in order for them to be detected, so plain white tables, for example, will not contain sufficient features to support placing content. Some devices such as the iPhone 12 Pro are able to use additional “active depth” sensors like LiDAR to detect these surfaces in native ARKit-powered apps, but as that sensor data is not exposed to the web, our implementation will always require some visual detail on the placement surface.
Our current first-milestone approach requires a single textured feature near the placement point, but over the coming months we’ll be generalising that placement logic to support recognising flat surfaces even if there isn’t a visual feature right next to the placement position.
In this milestone we have focused on smoothness of tracking rather than longer-term position consistency. In short, if you look away from the placement position and then look back, you may notice the content has moved slightly, but it will “lock back on” to a world feature near the placement point once that point is back in view. Right now the original placement point is the only point with this reattachment behaviour, but we’re working on specific behaviour for large experiences (such as portals) to ensure they always stay aligned with the ground even when the original placement point is outside the current camera view.
What’s the roadmap?
The near-term roadmap during the beta period will see us working on the improvements mentioned above alongside continuous quality and performance updates. On top of this, the new implementation lays the foundation for additional capabilities that we’ve already started working towards:
- Multiple anchors
- Explicit surface detection
- Ground plane detection + anchoring
- Extended tracking (from either Image Tracking or other markers)
We’re also investigating the feasibility of absolute scale estimation, even within the limitations of the web platform discussed above.
On the content side, you can also look forward to building world tracking experiences in our recently released no-code Zapworks Designer tool in the coming months, which will open up world tracking to a whole new audience.
How can I get started?
Universal AR SDKs
Our World Tracking beta is ready to use in all our Universal AR SDKs right now. To start creating, head over to our getting started guide on the Forum.
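As a taste of what that looks like, here’s a rough sketch using the Three.js Universal AR SDK. It follows the SDK’s documented instant world tracking pattern (set up the Zappar camera, create an instant world tracker, and reposition its anchor in front of the camera each frame until the user taps to place), but please treat it as illustrative and check the getting started guide for the current package and API details.

```javascript
// A minimal sketch with the Three.js Universal AR SDK; class and method names
// follow the SDK's documented instant world tracking pattern - verify against
// the getting started guide before using.
import * as THREE from "three";
import * as ZapparThree from "@zappar/zappar-threejs";

const renderer = new THREE.WebGLRenderer();
document.body.appendChild(renderer.domElement);

// Let the Zappar library share the same WebGL context as Three.js.
ZapparThree.glContextSet(renderer.getContext());

const camera = new ZapparThree.Camera();
const scene = new THREE.Scene();
scene.background = camera.backgroundTexture;

// Ask for camera (and, on iOS, motion sensor) permissions using the built-in UI.
ZapparThree.permissionRequestUI().then((granted) => {
  if (granted) camera.start();
  else ZapparThree.permissionDeniedUI();
});

// The instant world tracker, plus a group that follows its anchor.
const tracker = new ZapparThree.InstantWorldTracker();
const trackerGroup = new ZapparThree.InstantWorldAnchorGroup(camera, tracker);
scene.add(trackerGroup);

// Content to place. Map units are arbitrary in this milestone, so size content
// relative to the anchor offset below rather than in metres.
trackerGroup.add(
  new THREE.Mesh(new THREE.BoxGeometry(), new THREE.MeshBasicMaterial({ color: 0xff0000 }))
);

let hasPlaced = false;
renderer.setAnimationLoop(() => {
  // Until the user taps to place, keep the anchor 5 camera-units in front of the camera.
  if (!hasPlaced) tracker.setAnchorPoseFromCameraOffset(0, 0, -5);
  camera.updateFrame(renderer);
  renderer.render(scene, camera);
});

renderer.domElement.addEventListener("click", () => { hasPlaced = true; });
```

The other SDKs follow the same placement flow, exposed through each framework’s own idioms (components in A-Frame, behaviours in Unity, and so on).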
Zapworks Studio
I’m also excited to share that World Tracking is ready to use in Zapworks Studio, our AR-first 3D engine.
Any content built with Studio’s Z.InstantTracker() is compatible with the new implementation. Simply scan your zapcode on the https://wtbeta.arweb.app site to try it out.
We can also deploy this version of the Studio WebAR runtime to any custom WebAR sites for more permanent deployments. For API details, see this forum thread.
Anything else?
We’re thrilled to be able to share this first beta release of our all-new World Tracking for WebAR implementation. We think it’s a significant step forward that makes possible a much wider range of WebAR experiences, whether built in Zapworks Studio or powered by our Universal AR SDKs.
I’d like to thank Shuang Liu in particular for his work leading the development of this project on the computer vision side over the last year or so, and the rest of the awesome Zappar team who have contributed to all the needed updates across docs and our various SDK packages to get things in place for this beta release.
This is just the first milestone on our journey to offering a comprehensive, high-performance World Tracking solution for WebAR, and we’re really excited to continue delivering against our roadmap in the coming months.
Feel free to join our awesome community over on the Zapworks forum if you have any comments or questions. I’ll keep dropping by to contribute to the discussions there too. I hope you enjoy using the updated World Tracking for WebAR implementation!
Co-Founder & Chief R&D Officer, Zappar