Objective
Discover what it’s like to interact with objects in the real world using various computer-generated assets, whether that’s tossing CG donuts into a coffee cup, applying a robotic overlay to your own face, or dodging a hail of donuts.
Examples
- Zombie donuts! Toss enough donuts into the coffee and a zombie donut appears. Also serves as an occlusion test.
- A Lens Studio test to see whether any advanced facial-mapping capabilities exist (they don’t, at least not for the general public).
- Raining donuts! An exploration of basic interactions and plane detection. What can we cheat? (Not much.)
- A series of videos showing the backend of Google Cloud Anchors in action, using a Firebase project I set up on iOS.
- A CoreML + ARKit playtest.
All examples were built with Unity, Vuforia, Lens Studio, and ARKit.