Google’s new YouTube Stories feature lets you swap out your background (no green screen required)


Google researchers know how much people like to trick others into thinking they’re on the moon, or that it’s night instead of day, and other fun shenanigans only possible if you happen to be in a movie studio in front of a green screen. So they did what any good 2018 coder would do: build a neural network that lets you do it.

This “video segmentation” tool, as they call it (well, everyone does) is rolling out to YouTube Stories on mobile in a limited fashion starting now — if you see the option, congratulations, you’re a beta tester.

A lot of ingenuity seems to have gone into this feature. It’s a piece of cake to figure out where the foreground ends and the background begins if you have a depth-sensing camera (like the iPhone X’s front-facing array) or plenty of processing time and no battery to think about (like a desktop computer).

On mobile, though, and with an ordinary RGB image, it’s not so easy to do. And if doing a still image is hard, video is even more so, since the computer has to do the calculation 30 times a second at a minimum.

Well, Google’s engineers took that as a challenge and set up a convolutional neural network architecture, training it on thousands of labeled images of people’s heads and shoulders.

The network learned to pick out the common features of a head and shoulders, and a series of optimizations lowered the amount of data it needed to crunch in order to do so. And — although it’s cheating a bit — the result of the previous calculation (so, a sort of cutout of your head) gets used as raw material for the next one, further reducing load.

The result is a relatively accurate segmentation engine that runs more than fast enough to be used in video — 40 frames per second on the Pixel 2 and over 100 on the iPhone 7 (!).
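The temporal trick described above — feeding the previous frame’s mask back in as an extra input channel — can be sketched roughly as follows. This is a minimal illustration, not Google’s implementation: `toy_model` is a stand-in for the trained network, and the function names are invented for this example.

```python
import numpy as np

def segment_frame(model, frame, prev_mask):
    """One segmentation step.

    frame: (H, W, 3) RGB image, floats in [0, 1]
    prev_mask: (H, W, 1) mask from the previous frame (zeros on frame one)

    The previous mask is stacked as a fourth input channel, so the
    network can use its last result as a prior instead of starting
    from scratch 30+ times a second.
    """
    x = np.concatenate([frame, prev_mask], axis=-1)  # (H, W, 4)
    return model(x)  # (H, W, 1) new foreground mask

# Stand-in "model": marks pixels brighter than average as foreground,
# nudged toward the previous mask. A real system uses a trained CNN.
def toy_model(x):
    rgb, prior = x[..., :3], x[..., 3:]
    score = rgb.mean(axis=-1, keepdims=True)
    return ((score + 0.25 * prior) > score.mean()).astype(np.float32)

frames = np.random.rand(5, 32, 32, 3)           # pretend 5-frame clip
mask = np.zeros((32, 32, 1), dtype=np.float32)  # no prior on frame one
for frame in frames:
    mask = segment_frame(toy_model, frame, mask)
```

The feedback loop is why this counts as “cheating a bit”: each frame’s answer is bootstrapped from the last one, which works because adjacent video frames change very little.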

This is great news for a lot of folks — removing or replacing a background is a great tool to have in your toolbox and this makes it quite easy. And hopefully it won’t kill your battery.

Technological solutions to technology’s problems feature in “How to Fix The Future”


In this edition of Innovate 2018, Andrew Keen finds himself in the hot seat.

Keen, whose new book, “How to Fix the Future”, was published earlier this month, discusses a moment when it has suddenly become fashionable for tech luminaries to abandon utopianism in favor of its opposite.  The first generation of IPO winners have now become some of tech’s most vocal critics, conveniently targeting the new products and services launched by a younger generation of entrepreneurs.

For example, Tesla’s Elon Musk says that advances in Artificial Intelligence present a “fundamental risk to the existence of civilization.”  Salesforce CEO Marc Benioff believes Facebook ought to be regulated like a tobacco company because social media has become (literally?) carcinogenic.  And Hungarian-American billionaire George Soros last week called Google “a menace to society.”

Eschewing much of the over-the-top Luddism that now fills the New York Times (“Silicon Valley Is Not Your Friend”), the Guardian (“The Tech Insiders Who Fear a Smartphone Dystopia”) and other mainstream media outlets, Keen proffers practical solutions to a wide range of tech-related woes.  These include persistent public and private surveillance, labor displacement, and fake news.

From experiments in Estonia, Switzerland, Singapore, India and other digital outposts, Keen distills these five tools for fixing the future:

  • Increased regulation, particularly through antitrust law
  • New innovations designed to solve the unintended side-effects of earlier disruptors
  • Targeted philanthropy from tech’s leading moneymakers
  • Modern social safety nets for displaced workers and disenfranchised consumers
  • Educational systems geared for 21st century life

YouTube will remove ads and downgrade discoverability of channels posting offensive videos


After barring Logan Paul earlier today from serving ads on his video channel, YouTube has now announced a more formal and wider set of sanctions it’s prepared to level on any creator that starts to post videos that are harmful to viewers, others in the YouTube community, or advertisers.

As it has done with Paul (on two occasions now), the site said it will remove monetization options on the videos, specifically access to advertising programs. But on top of that, it’s added in a twist that will be particularly impactful given that a lot of a video’s popularity rests on it being discoverable:

“We may remove a channel’s eligibility to be recommended on YouTube, such as appearing on our home page, trending tab or watch next,” Ariel Bardin, Vice President of Product Management at YouTube, writes in a blog post.

The full list of steps, as outlined by YouTube:

1. Premium Monetization Programs, Promotion and Content Development Partnerships. We may remove a channel from Google Preferred and also suspend, cancel or remove a creator’s YouTube Original.

2. Monetization and Creator Support Privileges. We may suspend a channel’s ability to serve ads, ability to earn revenue and potentially remove a channel from the YouTube Partner Program, including creator support and access to our YouTube Spaces.

3. Video Recommendations. We may remove a channel’s eligibility to be recommended on YouTube, such as appearing on our home page, trending tab or watch next.

The changes are significant not just because they could really hit creators where it hurts, but because they also point to a real shift for the platform. YouTube has long been known as a home for edgy videos filled with pranks and potentially offensive content, made in the name of comedy or freedom of expression.

Now, the site is turning over a new leaf, using a large team of human curators and AI to track what’s being posted. Videos that violate YouTube’s advertising guidelines, or that pose a threat to its wider community, now stand a much bigger chance of getting dinged.

“When one creator does something particularly blatant—like conducts a heinous prank where people are traumatized, promotes violence or hate toward a group, demonstrates cruelty, or sensationalizes the pain of others in an attempt to gain views or subscribers—it can cause lasting damage to the community, including viewers, creators and the outside world,” writes Bardin. “That damage can have real-world consequences not only to users, but also to other creators, leading to missed creative opportunities, lost revenue and serious harm to your livelihoods. That’s why it’s critical to ensure that the actions of a few don’t impact the 99.9 percent of you who use your channels to connect with your fans or build thriving businesses.”

The moves come at a time when the site is making a much more concerted effort to raise the overall quality of what is posted and shared and viewed by millions of people every day, after repeated accusations that it has facilitated a range of bad actors, from people peddling propaganda to influence elections, to those who are posting harmful content aimed at children, to simply allowing cruel, tasteless and unusual videos to get posted in the name of comedy.

The issue seemed to come to a head with Paul, who posted a video in Japan in January that featured a suicide victim, and has since followed up with more questionable content presented as innocuous fun.

As I pointed out earlier today, even though he makes hundreds of thousands of dollars from ads (the exact amount is unknown and has only been estimated by different analytics companies), removing ads was only a partial sanction, since Paul monetizes in other ways, including merchandising. So it’s interesting to see YouTube adding more detailed ways of sanctioning creators that will hit at their very virality.

As in the case of Paul, YouTube emphasizes that the majority of people who post content on its platform will not be impacted by today’s announcement because their content is not on the wrong side of acceptable. These sorts of sanctions, it said, will be applied as a last resort and will often not be permanent, lasting only until the creator removes or alters the offending content. It will be worth watching whether and how this impacts video content overall on the platform.

YouTube tightens the rules around creator monetization and partnerships


In an effort to regain advertisers’ trust, Google is announcing what it says are “tough but necessary” changes to YouTube monetization.

For one thing, it’s setting a higher bar for the YouTube Partner Program, which is what allows publishers to make money through advertising. Previously, they needed 10,000 total views to join the program. Starting today, channels also need to have 1,000 subscribers and 4,000 hours of view time in the past year. (For now, those are just requirements to join the program, but Google says it will also start applying them to current partners on February 20.)
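The new thresholds are simple enough to express as a check. The sketch below is purely illustrative — `ypp_eligible` is a hypothetical helper, not a YouTube API; the platform applies these rules server-side:

```python
def ypp_eligible(subscribers: int, watch_hours_past_year: float) -> bool:
    """Check the thresholds described above: at least 1,000 subscribers
    and 4,000 hours of watch time over the past 12 months.
    (Hypothetical helper for illustration only.)"""
    return subscribers >= 1_000 and watch_hours_past_year >= 4_000

ypp_eligible(1_500, 5_200)  # True: clears both bars
ypp_eligible(900, 6_000)    # False: not enough subscribers
```

Note that both conditions must hold at once — a channel with heavy watch time but few subscribers, or vice versa, still misses the cut.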

This might assure marketers that their ads are less likely to run on random, fly-by-night channels, but as Google’s Paul Muret writes, “Of course, size alone is not enough to determine whether a channel is suitable for advertising.”

So in addition, he said:

We will closely monitor signals like community strikes, spam, and other abuse flags to ensure they comply with our policies. Both new and existing YPP channels will be automatically evaluated under this strict criteria and if we find a channel repeatedly or egregiously violates our community guidelines, we will remove that channel from YPP. As always, if the account has been issued three community guidelines strikes, we will remove that user’s accounts and channels from YouTube.

Muret also described changes planned for the more exclusive Google Preferred program, which is supposed to be limited to the best and most popular content. Vlogger Logan Paul was part of Google Preferred until the controversy over his “suicide forest” video got him kicked out last week — a story that suggests some of the limitations to Google’s approach.

Moving forward, Muret said the program will offer “not only … the most popular content on YouTube, but also the most vetted.” That means everything in Google Preferred should be manually curated, with ads only running “on videos that have been verified to meet our ad-friendly guidelines.” (Looks like all those new content moderators will be busy.)

Lastly, Muret said YouTube will be introducing a new “three-tier suitability system” in the next few months, aimed at giving marketers more control over the tradeoff between running ads in safer environments versus reaching more viewers.

As David Letterman’s first Netflix guest, Barack Obama warns against the ‘bubble’ of social media


David Letterman seems to be taking the title of his new Netflix show very seriously: On the very first episode of My Next Guest Needs No Introduction With David Letterman, he’s joined by former U.S. President Barack Obama.

The episode has plenty of funny moments, like Obama ribbing Letterman about his nearly Biblical beard. But they cover substantive political topics, too — not just during the onstage interview, but also in Letterman’s walk across Selma’s famous Edmund Pettus Bridge with Congressman John Lewis.

In fact, Letterman seems to be treating the new show as an opportunity to move a little bit away from his usual sardonic style and offer more depth and seriousness. He ended the interview by telling Obama, “Without a question of a doubt, you are the first president I really and truly respect.”

On the tech front, Obama repeated some of the points he made in a recent BBC interview with the U.K.’s Prince Harry. After being asked about threats to our democracy, Obama warned against “getting all your information off algorithms being sent through a phone.”

He noted that he owes much of his own political success to social media, which helped him build “what ended up being the most effective political campaign, probably in modern political history.” So he initially had “a very optimistic feeling” about the technology, but he said, “I think that what we missed was the degree to which people who are in power … special interests, foreign governments, etc., can in fact manipulate that and propagandize.”

Obama then recounted a science experiment (“not a big scientific experiment, but just an experiment that somebody did during the revolution that was taking place in Egypt”) where a liberal, a conservative and a “quote-unquote moderate” were asked to search for “Egypt,” and Google presented each of them with very different results.

“Whatever your biases were, that’s where you were being sent, and that gets more reinforced over time,” he said. “That’s what’s happening with these Facebook pages where more and more people are getting their news from. At a certain point you just live in a bubble, and that’s part of why our politics is so polarized right now.”

Appropriately for a politician who was so closely associated with hope, Obama also offered some optimism: “I think it is a solvable problem, but I think it’s one that we have to spend a lot of time thinking about.”

It seems that Facebook and the other big platforms are at least trying to address the issue. Yesterday, for example, Facebook’s Mark Zuckerberg announced that the social network will be prioritizing “meaningful social interactions” over news and publisher content.