Over the past few weeks I’ve been steeping myself in the developer and investor community that is quickly sprouting up around ARKit.
There are a variety of reasons people are excited about the possibilities, but it’s safe to say that the number one positive shared by everyone is the sheer scale of possible customers that will be able to experience augmented reality on day one of iOS 11. Hundreds of millions of potential users before the year is out is a potent pitch.
I’ve seen some very cool things from one- and two-person teams, and I’ve seen big corporate developers flex their muscle and get pumped about how capable AR can be.
At a round-robin demo event yesterday with a bunch of developers of AR apps and features, I got a nice cross-section of looks at possible AR applications. Though all of them were essentially consumer focused, there was an encouraging breadth to their approaches and some interesting overall lessons that will be useful for developers and entrepreneurs looking to leverage ARKit on iOS.
Let me blast through some impressions first and then I’ll note a few things.
IKEA

What it does: Allows you to place actual-size replicas of IKEA sofas and armchairs into your house. 2,000 items will be available at launch.
How it works: You tap on a catalog that lets you search and select items. You tap once to have the item hover over your floor, rotate it with a finger and tap again to place it. The colors and textures are accurately represented, and these are fully re-worked 3D models derived from the 3D scans IKEA uses for its catalogs. It looks and works great, just as you’d expect. IKEA Leader of Digital Transformation Michael Valdsgaard says it took them about seven weeks, beginning slightly before Apple’s announcement of ARKit, to implement the mode. It will be exclusive to iOS for now because it’s the largest single target of AR-capable devices. I asked Valdsgaard how long it took to get a first version up and running and he said just a couple of weeks. This has been a holy grail for furniture and home goods manufacturers and sales apps for what seems like forever, and it’s here.
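That tap-to-place flow maps closely onto ARKit’s hit-testing API. This is only a minimal sketch of the general technique, not IKEA’s actual code; names like `PlacementViewController` and `furnitureNode` are illustrative assumptions:

```swift
import ARKit
import SceneKit

class PlacementViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!
    var furnitureNode = SCNNode() // stand-in for a loaded 3D furniture model

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Cast a ray from the tap into the world, looking for a detected horizontal plane.
        guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else {
            return
        }
        // The hit's world transform holds the real-world floor position in its last column.
        let t = hit.worldTransform.columns.3
        furnitureNode.position = SCNVector3(t.x, t.y, t.z)
        sceneView.scene.rootNode.addChildNode(furnitureNode)
    }
}
```

The one-finger rotation step would typically be a `UIRotationGestureRecognizer` adjusting the node’s `eulerAngles.y` before the final placement tap.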
Food Network

What it does: Lets you place and decorate virtual desserts like cupcakes. Allows you to access the recipe for the base dessert.
How it works: You drop a dessert onto a surface and are provided with a bunch of options that let you decorate a cupcake. A couple of things about this demo: First, it worked just fine and was very cute. A little animated whale and some googly eyes topping a cupcake, which you can then share, is fine. However, it also demonstrates how some apps will treat AR as a “fun extra” (the button is literally labeled “Fun”) rather than as integral to the experience. This is to be expected in any first wave of a new technology, but examples like KabaQ show that there are other opportunities in food.
GIPHY World

What it does: Allows you to place gifs in 3D space, share videos of them, or even share the whole 3D scene in AR with friends who have the app. They can then add, remix and re-share new instances of the scene. As many people as you want can collaborate on the space.
How it works: You drop gifs into the world in the exact position you want them. A curated and trending mix of gifs with built-in transparency is the default, but you can also flip it over to place any old gif on the platform. Every scene gets a unique URL that can be remixed and added to by people you share it with, effectively creating a shared gif space that can be ping-ponged around. The placement of gifs felt very logical and straightforward, but the ability to “paint” with the gifs and then share the scenes whole in a collaborative fashion was a pleasant surprise. One impressive example was leaving a pathway to a “message” that a friend could follow when you shared the scene with them. Ralph Bishop, GIPHY’s head of design, says that the app will be free like their other apps but will have branded partners providing some content. GIPHY has something interesting going on here with a social AR experience. It’s early days, but this seems promising.
Arise

What it does: It’s a game from Climax Studios that places a (scalable) 3D world full of crumbling ruins onto your tabletop, through which you help your character navigate without any traditional controls.
How it works: You look through your device like a viewport and align the perspective of the various pathways to allow your character to progress. There are no on-screen controls at all, which is a very interesting trend. Climax CEO Simon Gardner says that what made translating the game into AR attractive to the studio (which has been around for 30 years) was the potentially huge install base of ARKit. They’re able to target hundreds of millions of potential customers by implementing a new technology, rather than the typical scenario where you start at effectively zero. The experience was also highly mobile, requiring you to move around the scenes to complete them. Some AR experiences may well be limited in their use or adoption because many people use phones in places where they are required to be stationary.
The Very Hungry Caterpillar

What it does: Translates the incredibly popular children’s book into AR.
How it works: The story unfolds by launching the app and simply pointing at objects in the scene. We saw just a small portion of the app that had apples being coaxed from a tree and the caterpillar scooching its way through them to grow larger. This was my favorite demo of the day, largely because it was cute, clever and just interactive enough for the age level it is targeting. It’s also another ‘zero controls’ example, which is wonderful for small children. Touch Press CEO Barry O’Neill says that they’ve seen some very interesting behavior from kids using the app including getting right down at eye level with the tiny caterpillar — which meant that they really had to up-res the textures and models to keep them looking great. Now that ARKit enables capturing any plane and remembering where objects are (even if you move 30-50 feet away and come back), storytelling in AR is finally moving beyond marker-based book enhancements. Any surface is a book and can tell a story.
The Walking Dead

What it does: It’s a location-aware shooter that has you turning in place to mow down zombies with various weaponry.
How it works: The scene I saw looked pretty solid, with high-resolution zombies coming at you from all angles, forcing you to move and rotate to dodge and fend them off. You progress by “rescuing” survivors from the show, who provide you with unique additional capabilities. Environmental enhancements like virtual “sewers” that walkers can crawl up out of give each scene a unique feel. It looked fast and smooth on a demo iPad. AMC and Next Games collaborated on this title. There were some additional fun features, like the ability to call up various poses for a survivor like Michonne and stand next to them to take a selfie, which felt super cool. The best kinds of IP-based games and apps will focus on unlocking these kinds of “bring that world into your world” experiences rather than cookie-cutter gameplay.
Some interesting notes:
Every app had its own unique take on the ‘scanning’ process that allows ARKit to perform plane detection before it can begin placing objects into the world: basically a few seconds during which you’re encouraged to move the phone around a bit to find flat surfaces and record enough data points to place and track objects. It’s not onerous and never took more than a few seconds, but it is something that users will have to be educated on. IKEA’s conversational interface prompted people to “scan” the room; The Walking Dead suggested that you “search for a survivor”; and Food Network’s app went with a “move around!” badge. Everyone will have to think about how to prompt and encourage this behavior to make ARKit work properly.
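Under the hood, that scanning phase is ARKit accumulating feature points until it can anchor a plane, and the anchor callback is the natural place to dismiss the prompt. A rough sketch of the pattern, not any of these apps’ actual code (the `scanPromptLabel` UI element is a hypothetical stand-in for a “move around!”-style badge):

```swift
import ARKit

class ScanViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    @IBOutlet var scanPromptLabel: UILabel! // hypothetical "move around!" prompt

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        // Ask ARKit to detect horizontal planes: floors and tabletops.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // ARKit calls this once it has gathered enough data to anchor a plane,
    // i.e. the user's few seconds of "scanning" have paid off.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        DispatchQueue.main.async { self.scanPromptLabel.isHidden = true }
    }
}
```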
Aside from the apps that are about placing objects directly into the scene, there is a focus on little-to-no on-screen controls. For Arise, your perspective is the control, allowing you to find an alignment that works to progress the character. There are no buttons or dials on the screen by design.
The Very Hungry Caterpillar’s control methodology was based on focus. The act of pointing at an object and leaving your gaze on it caused the story to progress and actions to be taken (knocking fruit out of a tree for the caterpillar to munch on or encouraging it to go to sleep on a stump). Most of the other apps relied on something as simple as a single tap for most actions. I think this control-free or control-light paradigm will be widespread. It will require some rethinking for many apps being translated.
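A gaze-driven control like the Caterpillar’s can be approximated by hit-testing from the screen’s center every frame and tracking how long the same object stays in view. This is an assumption about the general technique, not Touch Press’s implementation; the one-second dwell threshold and `trigger` helper are illustrative:

```swift
import ARKit
import SceneKit

class GazeViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!
    private var focusedNode: SCNNode?
    private var focusStart: TimeInterval = 0

    // Called by SceneKit on every rendered frame.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.main.async {
            let center = CGPoint(x: self.sceneView.bounds.midX, y: self.sceneView.bounds.midY)
            // SceneKit hit test: which virtual object sits under the middle of the screen?
            let node = self.sceneView.hitTest(center, options: nil).first?.node
            if node !== self.focusedNode {
                // Gaze moved to a new object (or to nothing); restart the dwell timer.
                self.focusedNode = node
                self.focusStart = time
            } else if let node = node, time - self.focusStart > 1.0 {
                // Gaze has rested on the same object long enough: fire its story action.
                self.trigger(node)
                self.focusStart = time
            }
        }
    }

    func trigger(_ node: SCNNode) { /* advance the story for this object */ }
}
```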
Development timelines were incredibly short, all things considered. Some of the apps I saw were created or translated into ARKit nearly wholesale within 7-10 weeks. For asset-heavy apps like games this will obviously be a tougher ramp, but not if you already have the assets. GIPHY World, for instance, places a bunch of curated gifs that look great floating in the world at your fingertips, but you can easily drop regular gifs in there from GIPHY’s millions of options.
The models Touch Press used for its previous Caterpillar app had to be upscaled quite a bit in complexity and detail because they fully expect children to view them from distances as close as a few inches. IKEA also had to update its models and textures. But given that the on-ramp is measured in days or weeks instead of months, I’d expect to see a large number of apps supporting ARKit experiences at the launch of iOS 11 in September, and a bunch more to follow quickly.