‘Post-reality’ video of CG imagery projected on a dancing man at high framerates


Not sure what there is to add to the headline, really. Well, I guess I should probably explain a bit.

Back in 2016 (on my birthday, in fact) researchers from the University of Tokyo posted an interesting video showing a projector and motion tracking system working together to project an image onto moving, deforming surfaces like a flapping piece of paper or a dancing person’s shirt.

Panasonic one-upped this with a more impressive display the next year, but the original lab has clapped back with a new video (spotted by New Atlas) that combines the awkwardness of academia with the awkwardness of dancing alone in the dark. And a quote from “The Matrix.”

Really though, it’s quite cool. Check out the hardware:

This dynamic projection mapping system, which they call DynaFlash v2, operates at 947 frames per second, using a depth-detection system running at the same rate to determine exactly where the image needs to be.

Not only does this let an image follow a person’s movement and orientation, it also tracks deformations in the material, such as stretching or the natural contortions of the body in motion.
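To get a feel for how tight that loop is, here’s a toy Python sketch of the per-frame cycle. Every function here is a stand-in of my own, not anything from the actual system; it just shows how little time each frame gets at 947 frames per second.

```python
import time
import numpy as np

def estimate_shift(depth_map):
    """Stand-in for real surface tracking: find where the nearest point sits."""
    y, x = np.unravel_index(np.argmin(depth_map), depth_map.shape)
    return int(y), int(x)

def warp_to_surface(image, shift):
    """Stand-in for real projective warping: just translate the image."""
    return np.roll(image, shift, axis=(0, 1))

image = np.random.rand(64, 64)      # the texture we want to keep "stuck" to the surface
frame_budget = 1.0 / 947            # about 1.05 ms per frame at 947 fps

for _ in range(5):                  # pretend these are consecutive sensor frames
    start = time.perf_counter()
    depth = np.random.rand(64, 64)  # stand-in for a depth map from the sensor
    shift = estimate_shift(depth)   # where did the surface move to?
    frame = warp_to_surface(image, shift)  # pre-distort the texture to follow it
    # A real system would now push `frame` to the projector; the whole
    # sense -> track -> warp -> project cycle has to fit inside frame_budget.
    elapsed = time.perf_counter() - start
    print(f"{elapsed * 1000:.3f} ms of a {frame_budget * 1000:.2f} ms budget")
```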

The extreme accuracy of this process makes for strange possibilities. As Ishikawa Watanabe, the leader of the lab, puts it:

The capacity of the dynamic projection mapping linking these components is not limited to fusing colorful unrealistic texture to reality. It can freely reproduce gloss and unevenness of non-existing materials by adaptively controlling the projected image based on the three-dimensional structure and motion of the applicable surface.

Perhaps it’s easier to show you:

Creepy, right? It’s using rendering techniques most often seen in games to produce the illusion that light is shining on non-existent tubes on the dancer’s body, and the effect is remarkably convincing.
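It’s the same trick a toy game renderer uses: take per-pixel surface normals and light them with a virtual lamp, so ridges and tubes appear to stand out from a surface that is actually flat. Here’s a rough numpy sketch of that idea (my own illustration, not the lab’s code):

```python
import numpy as np

def shade(normals, light_dir, view_dir, ambient=0.1, shininess=32.0):
    """normals: (H, W, 3) unit surface normals; light_dir/view_dir: unit 3-vectors."""
    diffuse = np.clip(normals @ light_dir, 0.0, None)                # Lambertian term
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)
    specular = np.clip(normals @ half_vec, 0.0, None) ** shininess   # Blinn-Phong highlight
    return np.clip(ambient + diffuse + 0.5 * specular, 0.0, 1.0)     # per-pixel brightness

# Example: fake a "tube" bulging out of a flat surface by tilting the normals.
h, w = 64, 256
x = np.linspace(-1, 1, w)
bump = np.exp(-(x / 0.2) ** 2)                 # tube-shaped height profile
slope = np.gradient(bump, x)
normals = np.zeros((h, w, 3))
normals[..., 0] = -slope                       # tilt normals across the tube
normals[..., 2] = 1.0
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

light = np.array([0.5, 0.5, 0.7]) / np.linalg.norm([0.5, 0.5, 0.7])
view = np.array([0.0, 0.0, 1.0])
brightness = shade(normals, light, view)       # grayscale image you could project
```

Project that brightness map onto a flat shirt and, as long as it stays registered to the surface, your eye reads the highlights and shadows as real geometry.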

It’s quite a different approach to augmented reality, and while I can’t see it in many living rooms, it’s clearly too cool to go unused; expect it to show up in demos from tech companies, performance artists and musicians. I can’t wait to see what Watanabe comes up with next.

Google’s new YouTube Stories feature lets you swap out your background (no green screen required)


Google researchers know how much people like to trick others into thinking they’re on the moon or that it’s night instead of day, along with other fun shenanigans only possible if you happen to be in a movie studio in front of a green screen. So they did what any good 2018 coder would do: build a neural network that lets you do it.

This “video segmentation” tool, as they call it (well, everyone does), is rolling out to YouTube Stories on mobile in a limited fashion starting now — if you see the option, congratulations, you’re a beta tester.

A lot of ingenuity seems to have gone into this feature. It’s a piece of cake to figure out where the foreground ends and the background begins if you have a depth-sensing camera (like the iPhone X’s front-facing array) or plenty of processing time and no battery to think about (like a desktop computer).

On mobile, though, and with an ordinary RGB image, it’s not so easy to do. And if doing a still image is hard, video is even more so, since the computer has to do the calculation 30 times a second at a minimum.

Well, Google’s engineers took that as a challenge and set up a convolutional neural network architecture, training it on thousands of labelled images.

The network learned to pick out the common features of a head and shoulders, and a series of optimizations lowered the amount of data it needed to crunch in order to do so. And — although it’s cheating a bit — the result of the previous calculation (so, a sort of cutout of your head) gets used as raw material for the next one, further reducing load.
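One way to wire that up is to feed the previous frame’s mask back in as an extra input channel alongside the RGB image. Here’s a minimal PyTorch sketch of that shape of model; the layer sizes are placeholders of my own, not Google’s actual architecture.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy person-segmentation net: input is RGB plus the previous frame's mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),  # 4 channels: RGB + prior mask
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                                  # 1-channel mask logits
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, rgb, prev_mask):
        x = torch.cat([rgb, prev_mask], dim=1)   # (N, 4, H, W)
        return torch.sigmoid(self.net(x))        # per-pixel foreground probability

# Usage: each frame is predicted together with the mask from the frame before it.
model = TinySegmenter()
frame = torch.rand(1, 3, 128, 128)
mask = torch.zeros(1, 1, 128, 128)               # start with an empty mask
for _ in range(3):                               # pretend these are consecutive frames
    mask = model(frame, mask)
```

Reusing the last mask this way means the network mostly has to track small changes between frames rather than solve the whole problem from scratch each time.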

The result is a relatively accurate segmentation engine that runs more than fast enough for video: 40 frames per second on the Pixel 2 and over 100 on the iPhone 7 (!).

This is great news for a lot of folks — removing or replacing a background is a great tool to have in your toolbox and this makes it quite easy. And hopefully it won’t kill your battery.

Twitter’s director of AR/VR leaves the company


The head of Twitter’s AR/VR team announced today via a tweet that he is leaving the social media site after 18 months.

Twitter hasn’t always been the quickest in its product development, and the AR/VR scene (which is still very much in its infancy) hasn’t seen the company make many daring moves. While Apple, Facebook, Snap and Google have shown off AR or VR developer platforms, there’s been little movement from Twitter in the arena.

The company has been slower to adopt AR content creation features like selfie masks, which have been on full display in competing products from both Snapchat and Facebook. The company’s biggest foray into virtual reality during the past couple years was likely the team’s work on Live 360 video in Periscope.

More generally, this has been a period of considerable movement in the AR/VR space, largely as a result of companies reshaping their visions for products and features. Last week, Facebook brought on a new director of AR who previously worked at Google.

Featured Image: Kevin Quennesson/Twitter