Instagram is testing a feature that will show users when someone else takes a screenshot of their story. Users included in the test are getting a warning that the next time they take a screenshot of a friend’s story, the friend will be able to see it.
And users who are participating in the test can see who took a screenshot of their story by going to the list of story viewers, where a new camera shutter logo appears next to anyone who took a screenshot of their photo. To be clear, creators won’t get a specific notification when someone takes a screenshot of their story; it will only show up in their list of story viewers.
In a statement provided to TechCrunch, Instagram acknowledged the test, saying “we are always testing ways to improve the experience on Instagram and make it easier to share any moment with the people who matter to you.”
Instagram is likely using this test to see whether the feature has any noticeable impact on engagement before deciding whether to roll it out to all users. For example, there’s a chance that some users may end up watching fewer stories over time if they aren’t able to take screenshots without notifying the creator.
Prior to this test, the only screenshot notifications on Instagram were for screenshots of private direct messages. Anyone could take a screenshot of someone’s photo or story without notifying the creator. Notably, users can rewatch stories as many times as they want within 24 hours, and the creator can’t see how many times any one person watched.
If rolled out, this feature would essentially align Instagram with Snapchat in terms of how the platform deals with screenshots. Any screenshot of a direct message triggers a notification to the sender, but a screenshot of a story will just result in a notation being placed next to the offender’s name in the viewer analytics tab.
After barring Logan Paul earlier today from serving ads on his video channel, YouTube has now announced a more formal and wider set of sanctions it’s prepared to impose on any creator who posts videos that are harmful to viewers, others in the YouTube community, or advertisers.
As it has done with Paul (on two occasions now), the site said it will remove monetization options on the videos, specifically access to advertising programs. But on top of that, it’s added in a twist that will be particularly impactful given that a lot of a video’s popularity rests on it being discoverable:
“We may remove a channel’s eligibility to be recommended on YouTube, such as appearing on our home page, trending tab or watch next,” Ariel Bardin, Vice President of Product Management at YouTube, writes in a blog post.
The full list of steps, as outlined by YouTube:
1. Premium Monetization Programs, Promotion and Content Development Partnerships. We may remove a channel from Google Preferred and also suspend, cancel or remove a creator’s YouTube Original.
2. Monetization and Creator Support Privileges. We may suspend a channel’s ability to serve ads, ability to earn revenue and potentially remove a channel from the YouTube Partner Program, including creator support and access to our YouTube Spaces.
3. Video Recommendations. We may remove a channel’s eligibility to be recommended on YouTube, such as appearing on our home page, trending tab or watch next.
The changes are significant not just because they could really hit creators where it hurts, but because they also point to a real shift for the platform. YouTube has long been known as a home for edgy videos filled with pranks and potentially offensive content, made in the name of comedy or freedom of expression.
Now, the site is turning over a new leaf, using a large team of human curators and AI to track the content of what’s being posted. Videos that violate YouTube’s advertising guidelines, or that pose a threat to its wider community, now stand a much bigger chance of getting their creators dinged.
“When one creator does something particularly blatant—like conducts a heinous prank where people are traumatized, promotes violence or hate toward a group, demonstrates cruelty, or sensationalizes the pain of others in an attempt to gain views or subscribers—it can cause lasting damage to the community, including viewers, creators and the outside world,” writes Bardin. “That damage can have real-world consequences not only to users, but also to other creators, leading to missed creative opportunities, lost revenue and serious harm to your livelihoods. That’s why it’s critical to ensure that the actions of a few don’t impact the 99.9 percent of you who use your channels to connect with your fans or build thriving businesses.”
The moves come at a time when the site is making a much more concerted effort to raise the overall quality of what is posted and shared and viewed by millions of people every day, after repeated accusations that it has facilitated a range of bad actors, from people peddling propaganda to influence elections, to those who are posting harmful content aimed at children, to simply allowing cruel, tasteless and unusual videos to get posted in the name of comedy.
The issue seemed to reach a head with Paul, who posted a video in Japan in January that featured a suicide victim, and has since followed up with more questionable content presented as innocuous fun.
As I pointed out earlier today, even though he makes hundreds of thousands of dollars from ads (the exact amount is unknown and has only been estimated by different analytics companies), removing ads was only a partial sanction, since Paul monetizes in other ways, including merchandising. So it’s interesting to see YouTube adding further ways of sanctioning creators that strike at their very virality.
As in the case of Paul, YouTube makes a point of the fact that the majority of people who post content on its platform will not be impacted by today’s announcement, because their content is not on the wrong side of acceptable. These sorts of sanctions, it said, will be applied as a last resort and will often not be permanent, lasting only until the creator removes or alters the offending content. It will be worth watching how and whether this impacts video content overall on the platform.
Instagram copied the ‘Snap’ and now it might be going after the ‘chat’. A video calling feature was spotted in a non-public version of Instagram by WhatsApp industry blog WABetaInfo. It would let users who’ve begun an Instagram Direct message thread video chat with each other. That could let users spend even more time in the app, but by actively communicating rather than passively browsing, which Facebook has come to admit isn’t good for people’s well-being.
For now, though, Instagram is refusing to comment. When asked about the feature, a spokesperson told TechCrunch, “We don’t comment on rumors and speculation.” That’s different from its more affirmative boilerplate statement given when it does confirm tests of forthcoming features: “we’re always testing new experiences for the Instagram community.” That’s what the company told us earlier this month when we reported Instagram’s partnership with Giphy for Stories GIFs…which launched a week later. This video calling feature might never launch.
But Instagram already lets people call in via video to each other’s Live Stories like they’re on a TV talk show, and send short ephemeral video clips over Direct. Instagram recently launched a standalone Direct messaging app. And video calling has become one of the most popular features of Instagram parent Facebook’s Messenger app — with 17 billion video chats occurring in 2017, up 2X from 2016.
So given that Instagram has the capability, interest, and infrastructure to add video calling, why wouldn’t it? WABetaInfo spotted the video call button in the top right of the chat screen; it’s only available when messaging with people who’ve already accepted your Direct request.
Leaked usage data from The Daily Beast’s Taylor Lorenz outed how Snapchat Stories sharing has stopped growing, in part because of competition from Instagram Stories, but users are still addicted to Snapchat’s chat feature. Snapchat offers audio and video calling as well as photo, audio clip, video clip, and text messaging, effectively making it an alternative to one’s phone itself.
Messaging is the center of the mobile experience, generating the most device opens and time spent. As Facebook tries to shift the behaviors it instills from harmful, zombie-like scrolling to real interpersonal interaction, doubling down on messaging is a clear path. And Facebook’s apps are always hungry for younger users who might not have phone numbers or bountiful mobile plans, and therefore might especially benefit from this new feature.
Now we’ll have to wait and see whether soon you’ll be calling friends on the Insta-phone. Or is it the Phonogram?
At least one Facebook employee has been interviewed by special counsel Robert Mueller as part of his investigation into potential Russian interference with the 2016 election, reports Wired. But don’t put on your conspiracy hats just yet.
Wired’s source indicated that the Facebook staffer was associated with the Trump campaign, which could mean just about anything. It is common for Facebook, Google, Twitter, and other properties selling ads to assign a liaison to a major spender on social media, which that campaign certainly was.
Since Facebook is also up to its eyes in Russia-related inquiries, it makes perfect sense that someone acting as go-between or advisor for the company and the campaign would be interviewed as a matter of course. Certainly no wrongdoing is implied.
The Facebook staffer would be the primary source for any information relating to Trump campaign spending, including whether or not there was any knowledge of or involvement in the Russian side of things — again, not to imply anything, just to say if there’s anything to know, that person would know it.
As Facebook was more strongly targeted by Russian bots and trolls during the election than its rivals, it makes sense that it would be pulled in like this, but don’t be surprised if others have a chance to chat with the special counsel’s team as well. I’ve asked Facebook for comment.
Reddit has finally joined other major web properties in adding two-factor authentication for all users. It’s been available for mods and some testers for a while, but this is the first time the vast multitudes of redditors will have access to it.
Turn it on and you’ll have to enter a six-digit code generated by an app on your phone whenever there’s a new login attempt. You’ll need Google Authenticator, Authy, or any TOTP-supporting auth app — texting codes is not supported and no longer recommended in general (and really, it was always a bad idea).
There’s not much to setting it up: go into the password/email area of the site’s preferences once you’ve logged in on a desktop browser, enable two-factor authentication and follow the instructions.
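For the curious, the six-digit codes those authenticator apps produce aren’t sent anywhere; they’re computed locally from a shared secret and the current time, per the TOTP standard (RFC 6238). A minimal Python sketch of that algorithm (not Reddit’s actual implementation, just the standard construction):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, period=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the base32 secret shown in the site's setup QR code.
    t: Unix timestamp (defaults to now); codes rotate every `period` seconds.
    """
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if t is None else t) // period)
    # HMAC-SHA1 over the big-endian 64-bit time counter (RFC 4226 HOTP core)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the server and the app derive the same code from the shared secret and the clock, nothing needs to travel over SMS, which is why TOTP apps are the recommended option.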
Now, this may be a problem for power users with multiple accounts, who might have trouble switching between the account they use for ordinary browsing and the one they use to post racist comments on every post they can, or the one they use to vehemently disagree with a headline without reading the article. But that’s the price of security.