ARKit — what is it?
Almost two weeks have passed since WWDC 2017 kicked off in San Jose, California, where Apple introduced its new augmented reality technology, ARKit.
Let’s go a little bit deeper into the way it technically works.
ARKit's silver bullet is the accuracy it achieves in positioning a virtual object in real space. To do this, ARKit combines data from the camera with CoreMotion data (accelerometer, gyroscope, and magnetometer). Thanks to that, your device detects and remembers horizontal surfaces in the camera's field of view, and then pins your chosen virtual objects to them using special points called ARAnchors.
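Here is a minimal sketch of how such a tracking session is started, assuming a SceneKit-backed view; the `ARViewController` class and `sceneView` outlet are illustrative names, not part of any shipped project:

```swift
import ARKit

final class ARViewController: UIViewController {
    // ARSCNView renders SceneKit content on top of the live camera feed.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // World tracking fuses camera frames with CoreMotion data
        // to follow the device's position and orientation in space.
        let configuration = ARWorldTrackingConfiguration()

        // Ask ARKit to detect horizontal surfaces (tables, floors, ...).
        configuration.planeDetection = .horizontal

        sceneView.session.run(configuration)
    }
}
```

Once the session is running, ARKit reports each detected surface as an `ARPlaneAnchor` that you can attach content to.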
Alas, ARKit can identify horizontal surfaces only, but resourceful developers will always find a workaround with a bit of development magic and a few voodoo spells.
Importantly, the user does not need to scan the surface before starting: it all happens automatically during the session. The device first finds one patch of a surface, then another, and so on, merging nearby smaller patches into a larger surface.
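You can watch this merging happen through the `ARSCNViewDelegate` callbacks: ARKit adds an anchor when it first finds a surface patch and then updates that anchor as its extent grows. A small sketch (the `PlaneWatcher` class name is an assumption):

```swift
import ARKit
import SceneKit

// Hypothetical delegate that logs plane detection and growth.
final class PlaneWatcher: NSObject, ARSCNViewDelegate {
    // Called when ARKit discovers a new surface patch.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("New surface: \(plane.extent.x) x \(plane.extent.z) m")
    }

    // Called when an existing anchor's extent expands, e.g. after
    // nearby smaller patches are merged into it.
    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Surface grew to: \(plane.extent.x) x \(plane.extent.z) m")
    }
}
```

Assign an instance of this class to the view's `delegate` property before running the session to receive these callbacks.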
What could make your AR experience even more immersive? Light estimation.
ARKit estimates the ambient lighting of the scene from the camera image and lets you apply it to your virtual objects, so they look like they belong.
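In practice that means reading the per-frame `ARLightEstimate` and feeding it into your scene's lighting. A sketch, assuming a SceneKit light node you created yourself (`lightNode` is an illustrative name):

```swift
import ARKit
import SceneKit

// Light estimation is on by default, but it can be set explicitly.
let configuration = ARWorldTrackingConfiguration()
configuration.isLightEstimationEnabled = true

// Call this once per rendered frame to match virtual lighting
// to the real environment.
func updateLighting(from session: ARSession, lightNode: SCNNode) {
    guard let estimate = session.currentFrame?.lightEstimate else { return }
    // ambientIntensity is in lumens; ~1000 corresponds to neutral lighting.
    lightNode.light?.intensity = estimate.ambientIntensity
    // ambientColorTemperature is in kelvin (e.g. 6500 is daylight-ish).
    lightNode.light?.temperature = estimate.ambientColorTemperature
}
```

A convenient place to call this is the renderer's per-frame update callback, so virtual objects dim and warm up together with the room.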
What’s next?
Next, we hope to see the technology working excellently, with high-quality content built on top of it. The size of the audience and Apple's resources should give this endeavor a good push.
For example, Luden.io is already in the middle of developing a new AR title. So stay in touch; we will tell you about it very soon!
P.S. You can also subscribe to our Telegram channel about AR: hot news, gossip, reviews, and other cool stuff.