This robo-bug can improvise its walk like a real insect


There are plenty of projects out there attempting to replicate the locomotion of insects, but one thing that computers and logic aren’t so good at is improvising and adapting the way even the smallest, simplest bugs do. This project from Tokyo Tech is a step in that direction, producing gaits on the fly that the researchers never programmed in.

“Perhaps the most exciting moment in the research was when we observed the robot exhibit phenomena and gaits which we neither designed nor expected, and later found out also exist in biological insects,” enthused the lead researcher, Ludovico Minati, in a news release.

One could program an immensely complicated AI or pattern generator to respond instantly to any of a thousand situations. But if a bug with a brain the size of a grain of sand can adapt to new situations quickly and smoothly, there must be a simpler, more analog way.

Different gaits produced by different patterns — okay, they don’t look that different, but they definitely are.

That’s what Minati was looking into, and his hexapod robot is certainly a simpler approach. A central pattern generator produces a master signal, which is interpreted by analog arrays and sent to the oscillators that move the legs. All it takes is tweaking one of five basic parameters and the arrays reconfigure their circuits and produce a working gait.

“An important aspect of the controller is that it condenses so much complexity into only a small number of parameters. These can be considered high-level parameters, in that they explicitly set the gait, speed, posture, etc.,” said one of Minati’s colleagues, Yasuharu Koike.
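The real controller is analog circuitry, but the core idea, a few coupled oscillators whose relative phases define the gait, is easy to sketch in software. Below is a rough, hypothetical Python toy (my sketch, not the researchers’ design) in which changing a single high-level parameter, the phase offset between neighboring legs, flips a six-legged walker between a tripod-like and a wave-like pattern.

```python
import numpy as np

def cpg_gait(phase_bias, speed=1.0, coupling=2.0, steps=2000, dt=0.005):
    """Six coupled phase oscillators, one per leg (a conceptual sketch,
    not the analog controller described in the paper).

    phase_bias: target phase offset between neighbouring legs, e.g.
                pi for a tripod-like pattern, pi/3 for a wave-like one.
    speed:      base oscillation frequency in Hz (roughly, walking speed).
    Returns an array of leg swing angles over time, shape (steps, 6).
    """
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, 6)
    history = np.zeros((steps, 6))
    for t in range(steps):
        for i in range(6):
            j = (i + 1) % 6  # couple each leg to its neighbour in a ring
            # Kuramoto-style coupling nudges leg i toward the desired
            # offset from leg j; the gait emerges from this interaction.
            phases[i] += dt * (2 * np.pi * speed
                               + coupling * np.sin(phases[j] - phases[i] - phase_bias))
        history[t] = np.sin(phases)  # map each phase to a leg swing angle
    return history

tripod = cpg_gait(phase_bias=np.pi)      # alternating tripod-like pattern
wave = cpg_gait(phase_bias=np.pi / 3)    # metachronal wave-like pattern
```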

Simplifying the hardware and software needed for adaptable, reliable locomotion could ease the creation of small robots and their deployment in unfamiliar terrain. The paper describing the project is published in IEEE Access.

‘Post-reality’ video of CG imagery projected on a dancing man at high framerates


Not sure what there is to add to the headline, really. Well, I guess I should probably explain a bit.

Back in 2016 (on my birthday in fact) researchers from the University of Tokyo posted an interesting video showing a projector and motion tracking system working together to project an image onto moving, deforming surfaces like a flapping piece of paper or dancing person’s shirt.

Panasonic one-upped this with a more impressive display the next year, but the original lab has clapped back with a new video (spotted by New Atlas) that combines the awkwardness of academia with the awkwardness of dancing alone in the dark. And a quote from “The Matrix.”

Really though, it’s quite cool. Check out the hardware:

This dynamic projection mapping system, which they call DynaFlash v2, operates at 947 frames per second, using a depth-detection system running at the same rate to determine exactly where the image needs to be.

Not only does this let an image follow a person’s movement and orientation, it also tracks deformations in the material, such as stretching or the natural contortions of the body in motion.
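Conceptually, the loop is simple even if running it at nearly 1,000 Hz is anything but: sense the surface, re-render the image to fit it, project, repeat. Here is a hypothetical Python sketch of that pipeline; the function names and synthetic data are placeholders of mine, not the lab’s actual software.

```python
import time
import numpy as np

# Hypothetical stand-ins for the real hardware. The actual system pairs a
# roughly 1,000 fps projector with an equally fast depth sensor; these stubs
# just return synthetic data so the shape of the control loop is clear.
def capture_depth():
    return np.random.rand(240, 320)              # depth map of the target surface

def project(frame):
    pass                                          # would push the frame to the projector

def warp_to_surface(texture, depth):
    # The real system re-renders the texture to match the surface's current
    # 3-D shape and pose; here we only resample it to the depth map's size.
    h, w = depth.shape
    ys = np.linspace(0, texture.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, texture.shape[1] - 1, w).astype(int)
    return texture[np.ix_(ys, xs)]

texture = np.random.rand(480, 640, 3)             # the image we want "glued" on
target_dt = 1.0 / 947                             # aim for the reported 947 fps

for _ in range(947):                              # run for about one second
    start = time.perf_counter()
    depth = capture_depth()                       # 1. sense the surface right now
    frame = warp_to_surface(texture, depth)       # 2. re-render the image to fit it
    project(frame)                                # 3. project before the surface moves
    time.sleep(max(0.0, target_dt - (time.perf_counter() - start)))
```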

The extreme accuracy of this process makes for strange possibilities. As the Ishikawa Watanabe lab, which built the system, puts it:

The capacity of the dynamic projection mapping linking these components is not limited to fusing colorful unrealistic texture to reality. It can freely reproduce gloss and unevenness of non-existing materials by adaptively controlling the projected image based on the three-dimensional structure and motion of the applicable surface.

Perhaps it’s easier to show you:

Creepy, right? It’s using rendering techniques most often seen in games to produce the illusion that there’s light shining on non-existent tubes on the dancer’s body. The illusion is remarkably convincing.
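For the curious, the fake-lighting trick is standard game-engine shading applied through a projector: estimate surface normals from the depth data, compute a specular highlight for a virtual light, and project that brightness back onto the body. Here is a tiny numpy sketch of the idea, my illustration rather than the lab’s code.

```python
import numpy as np

# A minimal numpy sketch (not the lab's code) of the game-style trick above:
# estimate surface normals from a depth map, then compute a Blinn-Phong
# specular highlight for a virtual light and project that brightness, so a
# plain surface looks glossy or embossed.
def fake_gloss(depth, light=(0.3, 0.3, 1.0), shininess=32):
    dz_dy, dz_dx = np.gradient(depth)                       # surface slope
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)
    view = np.array([0.0, 0.0, 1.0])                        # "camera" straight on
    half = (light + view) / np.linalg.norm(light + view)

    diffuse = np.clip(normals @ light, 0.0, 1.0)            # basic shading
    spec = np.clip(normals @ half, 0.0, 1.0) ** shininess   # the glossy highlight
    return np.clip(0.2 + 0.6 * diffuse + 0.8 * spec, 0.0, 1.0)

# Example: a depth map with a raised "tube" running across the surface
y, x = np.mgrid[0:240, 0:320]
depth = 5.0 * np.exp(-((y - 120) ** 2) / 200.0)             # a ridge along one axis
frame_to_project = fake_gloss(depth)
```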

It’s quite a different approach to augmented reality, and while I can’t see it in many living rooms, it’s clearly too cool to go unused — expect it to show up in demos from tech companies, performance artists and musicians. I can’t wait to see what this lab comes up with next.

Google’s new YouTube Stories feature lets you swap out your background (no green screen required)


Google researchers know how much people like to trick others into thinking they’re on the moon, or that it’s night instead of day, and other fun shenanigans only possible if you happen to be in a movie studio in front of a green screen. So they did what any good 2018 coder would do: build a neural network that lets you do it.

This “video segmentation” tool, as they call it (well, everyone does), is rolling out to YouTube Stories on mobile in a limited fashion starting now — if you see the option, congratulations, you’re a beta tester.

A lot of ingenuity seems to have gone into this feature. It’s a piece of cake to figure out where the foreground ends and the background begins if you have a depth-sensing camera (like the iPhone X’s front-facing array) or plenty of processing time and no battery to think about (like a desktop computer).

On mobile, though, and with an ordinary RGB image, it’s not so easy to do. And if doing a still image is hard, video is even more so, since the computer has to do the calculation 30 times a second at a minimum.

Well, Google’s engineers took that as a challenge, and set up a convolutional neural network architecture, training it on thousands of labelled images of people.

The network learned to pick out the common features of a head and shoulders, and a series of optimizations lowered the amount of data it needed to crunch in order to do so. And — although it’s cheating a bit — the result of the previous calculation (so, a sort of cutout of your head) gets used as raw material for the next one, further reducing load.
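That last trick, feeding the previous frame’s mask back in as an extra input channel, is easy to picture in code. Here’s a toy PyTorch sketch of the idea; Google’s actual model is a far more heavily optimized mobile network, and everything below is purely illustrative.

```python
import torch
import torch.nn as nn

# A toy sketch of the idea, not Google's actual model: the network sees the
# current RGB frame plus the mask it predicted for the previous frame as a
# fourth input channel, so every prediction starts from the last one.
class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                     # per-pixel foreground logit
        )

    def forward(self, rgb, prev_mask):
        x = torch.cat([rgb, prev_mask], dim=1)       # (N, 3 + 1, H, W)
        return torch.sigmoid(self.net(x))

model = TinySegmenter()
prev_mask = torch.zeros(1, 1, 128, 128)              # no prior on the first frame
for frame in torch.rand(30, 1, 3, 128, 128):         # a fake 30-frame clip
    mask = model(frame, prev_mask)                   # segment using last frame's result
    prev_mask = mask.detach()                        # and feed this one into the next
```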

The result is a fast, relatively accurate segmentation engine that runs more than fast enough to be used in video — 40 frames per second on the Pixel 2 and over 100 on the iPhone 7 (!).

This is great news for a lot of folks — removing or replacing a background is a great tool to have in your toolbox and this makes it quite easy. And hopefully it won’t kill your battery.

Intel ships update for newest Spectre-affected chips


Intel has announced that the fix is out for its latest chips affected by Spectre, the speculative execution flaw that can leak sensitive data from memory and affects practically all modern processors. The patch is for the Skylake generation (late 2015) and newer, though most users will still have to wait for the code to be implemented by whoever manufactured their computer (specifically, their motherboard).

The various problems presented in January by security researchers have to be addressed by a mix of fixes at the application, OS, kernel and microcode level. This patch is the last of those, and it replaces an earlier one that was found to be unstable.

These aren’t superficial tweaks and they’re being made under pressure, so some hiccups are to be expected — but Intel is also a huge company that has had months of warning to get this right, so people may be frustrated by the less-than-optimal way the flaws have been addressed.

As before, there isn’t much you as a user can do except check regularly that your PC and applications are up to date and, of course, avoid running any code you don’t trust.

If you’re on an older chip generation, like Sandy Bridge, you’ll have to wait a bit longer — your fix is still in beta. You don’t want to be their test machine.

Featured image: Alice Bevan–McGregor / Flickr, under a CC BY 2.0 license

Humans sow seeds of destruction by abusing poor robot just trying to walk through a door


You may remember that last week Boston Dynamics’ latest robot, the trim and yellow SpotMini, learned to open doors. Seems like at this point you don’t want to anger them. They will find you. Or rather, they’ll find the company’s designated robot abuser and punish him for stopping poor robots from doing what they’ve been told to do. Soon SpotMini will not be the one on a leash.

Of course, this is necessary testing for when the robot actually has to navigate a busy environment and this kind of interference, deliberate or accidental, comes up. A robot has to be able to bounce back; if it freaks out when anything unexpected happens, it won’t be able to accomplish much.


All the same, it’s excruciating to watch, at least for me. It’s just trying so hard, and it doesn’t get angry at the man, even when a piece of it comes off!

Perhaps robots will be superhuman in patience as well as door-opening ability. Let’s hope so. I kicked a SpotMini once, at its creator’s invitation, and if any machines are reading this, I’m sorry.

Featured Image: Boston Dynamics