Monday, April 30, 2018

Twitter announces new video partnerships with NBCUniversal and ESPN

Twitter is hosting its Digital Content NewFronts tonight, where it’s unveiling 30 renewals and new content deals — the company says that’s nearly twice as many as it announced last year.

Those include partnerships with the big players in media — starting with NBCUniversal, which will be sharing live video and clips from properties including NBC News, MSNBC, CNBC and Telemundo.

Twitter also announced some of the shows it will be airing as part of the ESPN deal announced earlier today: SportsCenter Live (a Twitter version of the network’s flagship) and Fantasy Focus Live (a livestream of the fantasy sports podcast).

Plus, the company said it’s expanding its existing partnership with Viacom with shows like Comedy Central’s Creator’s Room, BET Breaks and MTV News.

During the NewFronts event, Twitter’s head of video Kayvon Beykpour said daily video views on the platform have nearly doubled in the past year. And Kay Madati (pictured above), the company’s head of content partnerships, described the company as “the ultimate mobile platform where video and conversation share the same screen.”

As Twitter continues to invest in video content, it’s been emphasizing its advantage in live video, a theme that continued in this year’s announcement.

“Twitter is the only place where conversation is tied to video and the biggest live moments, giving brands the unique ability to connect with leaned in consumers who are shaping culture,” said Twitter Global VP of Revenue and Content Partnerships Matthew Derella in a statement. “That’s our superpower.”

During the event, Derella also (implicitly) contrasted Twitter with other digital platforms that have struggled with questions about transparency and whether ads are running in an appropriate environment. Tonight, he said marketers could say goodbye to unsafe brand environments and a lack of transparency: “And we say hello to you being in control of where your video aligns … we say hello to a higher measure of transparency, we say hello to new premium inventory and a break from the same old choices.”

On top of all the new content, Twitter is also announcing new ad programs. There are Creator Originals, a set of scripted series from influencers who will be paired up with sponsored brands. (The program is powered by Niche, the influencer marketing startup that Twitter acquired a few years ago.) And there’s a new Live Brand Studio — as the name suggests, it’s a team that works with marketers to create live video.

Here are some other highlights from the content announcements:

  • CELEBrate, a series from Ellen Digital Studios where people get heartwarming messages from their idols.
  • Delish Food Day and IRL from Hearst Magazines Digital Media
  • Power Star Live, which is “inspired by the cultural phenomenon of Black Twitter” and livestreamed from the Atlanta University Center, from Will Packer Media.
  • BuzzFeed News is renewing AM to DM until the end of 2018.
  • Pattern, a new brand focused on weather- and science-related news.
  • Programming from the Huffington Post (which, like TechCrunch, is owned by Verizon/Oath), History, Vox and BuzzFeed News that highlights women around the world.

Developing



from Social – TechCrunch https://ift.tt/2ji2tWE

WhatsApp boss and co-founder Jan Koum to quit

Jan Koum says he wants to pursue new projects, but reports say he clashed with parent company Facebook.

from BBC News - Technology https://ift.tt/2FuuhzS

WhatsApp CEO Jan Koum quits Facebook due to privacy intrusions

“It is time for me to move on . . . I’m taking some time off to do things I enjoy outside of technology, such as collecting rare air-cooled Porsches, working on my cars and playing ultimate frisbee,” WhatsApp co-founder, CEO, and Facebook board member Jan Koum wrote today. The announcement followed shortly after The Washington Post reported that Koum would leave due to disagreements with Facebook management about WhatsApp user data privacy and weakened encryption. Koum obscured that motive in his note, which says “I’ll still be cheering WhatsApp on – just from the outside.”

Facebook CEO Mark Zuckerberg quickly commented on Koum’s Facebook post about his departure, writing “Jan: I will miss working so closely with you. I’m grateful for everything you’ve done to help connect the world, and for everything you’ve taught me, including about encryption and its ability to take power from centralized systems and put it back in people’s hands. Those values will always be at the heart of WhatsApp.” That comment further tries to downplay the idea that Facebook pushed Koum away by trying to erode encryption.

It’s currently unclear who will replace Koum as WhatsApp’s CEO, and what will happen to his Facebook board seat.

Values Misaligned

Koum sold WhatsApp to Facebook in 2014 for a jaw-dropping $19 billion. Since then the service has more than tripled its user count to 1.5 billion, making the price paid to turn messaging into a one-horse race look like a steal. But at the time, Koum and co-founder Brian Acton were assured that WhatsApp wouldn’t have to run ads or merge its data with Facebook’s. So were regulators in Europe, where WhatsApp is most popular.

A year and a half later, though, Facebook pressured WhatsApp to change its terms of service and give users’ phone numbers to its parent company. That let Facebook target those users with more precise advertising, such as by letting businesses upload lists of phone numbers to hit those people with promotions. Facebook was eventually fined $122 million by the European Union in 2017 — a paltry sum for a company earning over $4 billion in profit per quarter.

But the perceived invasion of WhatsApp user privacy drove a wedge between Koum and the parent company. Acton left Facebook in November, and has publicly supported the #DeleteFacebook movement since.

WashPo writes that Koum was also angered by Facebook executives pushing for a weakening of WhatsApp’s end-to-end encryption in order to facilitate its new WhatsApp For Business program. It’s possible that letting multiple team members from a business all interact with its WhatsApp account could be incompatible with strong encryption. Facebook plans to finally make money off WhatsApp by offering bonus services to big companies like airlines, e-commerce sites, and banks that want to conduct commerce over the chat app.

Jan Koum, the CEO and co-founder of WhatsApp, speaks at the Digital Life Design conference on January 18, 2016, in Munich, southern Germany, where high-profile guests spend three days discussing trends and developments in digitization. (Photo: TOBIAS HASE/AFP/Getty Images)

Koum was heavily critical of advertising in apps, once telling Forbes that “Dealing with ads is depressing . . . You don’t make anyone’s life better by making advertisements work better.” He vowed to keep them out of WhatsApp. But over the past year, Facebook has rolled out display ads in the Messenger inbox. Without Koum around, Facebook might push to expand those obtrusive ads to WhatsApp as well.

The high-profile departure comes at a vulnerable time for Facebook, with its big F8 developer conference starting tomorrow despite Facebook simultaneously shutting down parts of its dev platform as penance for the Cambridge Analytica scandal. Meanwhile, Google is trying to fix its fragmented messaging strategy, ditching apps like Allo to focus on a mobile carrier-backed alternative to SMS it’s building into Android Messages.

While the News Feed made Facebook rich, it also made it the villain. Messaging has become its strongest suit thanks to the dual dominance of Messenger and WhatsApp. Considering many users surely don’t even realize WhatsApp is owned by Facebook, Koum’s departure over policy concerns isn’t likely to change that. But it’s one more point in what’s becoming a thick line connecting Facebook’s business ambitions to its cavalier approach to privacy.

You can read Koum’s full post below.

It's been almost a decade since Brian and I started WhatsApp, and it's been an amazing journey with some of the best…

Posted by Jan Koum on Monday, April 30, 2018



from Social – TechCrunch https://ift.tt/2jigy6r

Facebook is trying to block Schrems II privacy referral to EU top court

Facebook’s lawyers are attempting to block a High Court decision in Ireland, where its international business is headquartered, to refer a long-running legal challenge to the bloc’s top court.

The social media giant’s lawyers asked the court to stay the referral to the CJEU today, Reuters reports. Facebook is trying to appeal the referral by challenging Irish case law — and wants a stay granted in the meanwhile.

The case relates to a complaint filed by privacy campaigner and lawyer Max Schrems regarding a transfer mechanism that’s currently used by thousands of companies to authorize flows of personal data on EU citizens to the US for processing. Schrems was specifically challenging Facebook’s use of so-called Standard Contractual Clauses (SCCs) when he updated an earlier complaint — on the same core data transfer issue, which relates to US government mass surveillance practices as revealed by the 2013 Snowden disclosures — with Ireland’s data watchdog.

However the Irish Data Protection Commissioner decided to refer the issue to the High Court to consider the legality of SCCs as a whole. And earlier this month the High Court decided to refer a series of questions relating to EU-US data transfers to Europe’s top court — seeking a preliminary ruling on a series of fundamental questions that could even unseat another data transfer mechanism, called the EU-US Privacy Shield, depending on what CJEU judges decide.

An earlier legal challenge by Schrems — which was also related to the clash between US mass surveillance programs (which harvest data from social media services) and EU fundamental rights (which mandate that web users’ privacy is protected) — resulted in the previous arrangement for transatlantic data flows being struck down by the CJEU in 2015, after standing for around 15 years.

Hence privacy watchers refer to the current case as ‘Schrems II’. You can also see why Facebook is keen to delay another CJEU referral if it can.

According to comments made by Schrems on Twitter the Irish High Court reserved judgement on Facebook’s request today, with a decision expected within a week…

Facebook’s appeal is based on trying to argue against Irish case law — which Schrems says does not allow for an appeal against such a referral, hence he’s couching it as another delaying tactic by the company:

We reached out to Facebook for comment on the case. At the time of writing it had not responded.

In a statement from October, after an earlier High Court decision on the case, Facebook said:

Standard Contract Clauses provide critical safeguards to ensure that Europeans’ data is protected once transferred to companies that operate in the US or elsewhere around the globe, and are used by thousands of companies to do business. They are essential to companies of all sizes, and upholding them is critical to ensuring the economy can continue to grow without disruption.

This ruling will have no immediate impact on the people or businesses who use our services. However it is essential that the CJEU now considers the extensive evidence demonstrating the robust protections in place under Standard Contractual Clauses and US law, before it makes any decision that may endanger the transfer of data across the Atlantic and around the globe.



from Social – TechCrunch https://ift.tt/2r8Wqag

Twitter also sold data access to Cambridge Analytica researcher

Since it was revealed that Cambridge Analytica improperly accessed the personal data of millions of Facebook users, one question has lingered in the minds of the public: What other data did Dr. Aleksandr Kogan gain access to?

Twitter confirmed to The Telegraph on Saturday that GSR, Kogan’s own commercial enterprise, had purchased one-time API access to a random sample of public tweets from a five-month period between December 2014 and April 2015. Twitter told Bloomberg that, following an internal review, the company did not find any access to private data about people who use Twitter.

Twitter sells API access to large organizations or enterprises for the purposes of surveying sentiment or opinion during various events, or around certain topics or ideas.

Here’s what a Twitter spokesperson said to The Telegraph:

Twitter has also made the policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica. This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules.

Obviously, this doesn’t have the same scope as the data harvested about users on Facebook. Twitter’s data on users is far less personal. Location on the platform is opt-in and generic at that, and users are not forced to use their real name on the platform.

Still, it shows just how broad the Cambridge Analytica data collection was ahead of the 2016 election.

We reached out to Twitter and will update when we hear back.



from Social – TechCrunch https://ift.tt/2HE9dJ2

Europe eyeing bot IDs, ad transparency and blockchain to fight fakes

European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.

This requirement comes as part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply — by this summer — within a wider package of proposals generally aimed at tackling the problematic spread and impact of disinformation online.

The proposals follow an EC-commissioned report last month, by its High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

Bots, fake accounts, political ads, filter bubbles

In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into a greater level of detail on how that might be achieved. Clearly it’s intending platforms to have to come up with relevant methodologies.

Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
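To make the criteria-scoring approach described above concrete, here is a toy sketch in Python. Everything in it — the feature names, thresholds and weights — is invented purely for illustration; real bot-detection tools use far richer signals (posting cadence, network structure, content similarity) and trained models rather than hand-set rules.

```python
def bot_likelihood(account: dict) -> float:
    """Return a score in [0, 1]; higher means more bot-like.

    Illustrative heuristic only: each criterion contributes a fixed,
    made-up weight when its threshold is crossed.
    """
    score = 0.0
    # Very high posting frequency is weakly associated with automation.
    if account.get("tweets_per_day", 0) > 100:
        score += 0.3
    # Default profile images and empty bios are common on throwaway accounts.
    if not account.get("has_profile_image", True):
        score += 0.2
    if not account.get("bio"):
        score += 0.1
    # Following vastly more accounts than follow back can indicate spam behaviour.
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 0 and followers / following < 0.01:
        score += 0.4
    return min(score, 1.0)

suspect = {"tweets_per_day": 500, "has_profile_image": False,
           "bio": "", "followers": 3, "following": 5000}
print(bot_likelihood(suspect))  # high score for a bot-like profile
```

A production classifier would calibrate such a score against labeled accounts — which is exactly where the platforms’ privileged view into their own systems gives them an edge over outside researchers.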

Another factor here is the sophisticated nature of some online disinformation campaigns — the state-sponsored and heavily resourced efforts by Kremlin-backed entities such as Russia’s Internet Research Agency, for example. If the focus ends up being on identifying purely algorithmically controlled bots, rather than bots that have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.

That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).

Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.

Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.

The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.

We’ve asked for more on its thinking here and will update this story with any response. It looks to be testing the water to see how much of the workings of platforms’ algorithmic blackboxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more sweeping regulations down the line.

Filter bubbles also appear to be informing the Commission’s thinking, as it says it wants platforms to make it easier for users to “discover and access different news sources representing alternative viewpoints” — via tools that let users customize and interact with the online experience to “facilitate content discovery and access to different news sources”.

Though another stated objective is for platforms to “improve access to trustworthy information” — so there are questions about how those two aims can be balanced, i.e. without efforts towards one undermining the other. 

On trustworthiness, the EC says it wants platforms to help users assess whether content is reliable using “indicators of the trustworthiness of content sources”, as well as by providing “easily accessible tools to report disinformation”.

In one of several steps Facebook has taken since 2016 to try to tackle the problem of fake content being spread on its platform the company experimented with putting ‘disputed’ labels or red flags on potentially untrustworthy information. However the company discontinued this in December after research suggested negative labels could entrench deeply held beliefs, rather than helping to debunk fake stories.

Instead it started showing related stories — containing content it had verified as coming from news outlets its network of fact checkers considered reputable — as an alternative way to debunk potential fakes.

The Commission’s approach looks to be aligning with Facebook’s rethought approach — with the subjective question of how to make judgements on what is (and therefore what isn’t) a trustworthy source likely being handed off to third parties, given that another strand of the code is focused on “enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation”.

Since 2016 Facebook has been leaning heavily on a network of local third party ‘partner’ fact-checkers to help identify and mitigate the spread of fakes in different markets — including checkers for written content and also photos and videos, the latter in an effort to combat fake memes before they have a chance to go viral and skew perceptions.

In parallel Google has also been working with external fact checkers, such as on initiatives such as highlighting fact-checked articles in Google News and search. 

The Commission clearly approves of the companies reaching out to a wider network of third party experts. But it is also encouraging work on innovative tech-powered fixes to the complex problem of disinformation — describing AI (“subject to appropriate human oversight”) as set to play a “crucial” role for “verifying, identifying and tagging disinformation”, and pointing to blockchain as having promise for content validation.

Specifically it reckons blockchain technology could play a role by, for instance, being combined with the use of “trustworthy electronic identification, authentication and verified pseudonyms” to preserve the integrity of content and validate “information and/or its sources, enable transparency and traceability, and promote trust in news displayed on the Internet”.
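As a minimal sketch of the content-validation idea, the integrity half can be illustrated with a plain cryptographic hash: a digest recorded at publication time (on a blockchain, in the Commission’s framing) lets anyone later verify that the content hasn’t been altered. This Python example shows only the hashing step; the ledger, electronic identification and authentication layers the Commission mentions are out of scope.

```python
import hashlib

def content_digest(text: str) -> str:
    """SHA-256 digest of a piece of content, hex-encoded."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Article text as published."
recorded = content_digest(original)  # stored at publication time

# Later verification: any change to the content changes the digest.
assert content_digest("Article text as published.") == recorded
assert content_digest("Article text as tampered.") != recorded
```

The digest proves the bytes are unchanged; tying it to a *trustworthy source* is the harder part, which is where the verified-identity and pseudonym pieces of the proposal come in.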

It’s one of a handful of nascent technologies the executive flags as potentially useful for fighting fake news, and whose development it says it intends to support via an existing EU research funding vehicle: The Horizon 2020 Work Program.

It says it will use this program to support research activities on “tools and technologies such as artificial intelligence and blockchain that can contribute to a better online space, increasing cybersecurity and trust in online services”.

It also flags “cognitive algorithms that handle contextually-relevant information, including the accuracy and the quality of data sources” as a promising tech to “improve the relevance and reliability of search results”.

The Commission is giving platforms until July to develop and apply the Code of Practice — and is using the possibility that it could still draw up new laws if it feels the voluntary measures fail as a mechanism to encourage companies to put the sweat in.

It is also proposing a range of other measures to tackle the online disinformation issue — including:

  • An independent European network of fact-checkers: The Commission says this will establish “common working methods, exchange best practices, and work to achieve the broadest possible coverage of factual corrections across the EU”; and says they will be selected from the EU members of the International Fact Checking Network, which it notes follows “a strict International Fact Checking Network Code of Principles”
  • A secure European online platform on disinformation to support the network of fact-checkers and relevant academic researchers with “cross-border data collection and analysis”, as well as benefitting from access to EU-wide data
  • Enhancing media literacy: On this it says a higher level of media literacy will “help Europeans to identify online disinformation and approach online content with a critical eye”. So it says it will encourage fact-checkers and civil society organisations to provide educational material to schools and educators, and organise a European Week of Media Literacy
  • Support for Member States in ensuring the resilience of elections against what it dubs “increasingly complex cyber threats” including online disinformation and cyber attacks. Stated measures here include encouraging national authorities to identify best practices for the identification, mitigation and management of risks in time for the 2019 European Parliament elections. It also notes work by a Cooperation Group, saying “Member States have started to map existing European initiatives on cybersecurity of network and information systems used for electoral processes, with the aim of developing voluntary guidance” by the end of the year.  It also says it will also organise a high-level conference with Member States on cyber-enabled threats to elections in late 2018
  • Promotion of voluntary online identification systems with the stated aim of improving the “traceability and identification of suppliers of information” and promoting “more trust and reliability in online interactions and in information and its sources”. This includes support for related research activities in technologies such as blockchain, as noted above. The Commission also says it will “explore the feasibility of setting up voluntary systems to allow greater accountability based on electronic identification and authentication scheme” — as a measure to tackle fake accounts. “Together with other actions aimed at improving traceability online (improving the functioning, availability and accuracy of information on IP and domain names in the WHOIS system and promoting the uptake of the IPv6 protocol), this would also contribute to limiting cyberattacks,” it adds
  • Support for quality and diversified information: The Commission is calling on Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment. The Commission says it will launch a call for proposals in 2018 for “the production and dissemination of quality news content on EU affairs through data-driven news media”

It says it will aim to co-ordinate its strategic comms policy to try to counter “false narratives about Europe” — which makes you wonder whether debunking the output of certain UK tabloid newspapers might fall under that new EC strategy — and also more broadly to tackle disinformation “within and outside the EU”.

Commenting on the proposals in a statement, the Commission’s VP for the Digital Single Market, Andrus Ansip, said: “Disinformation is not new as an instrument of political influence. New technologies, especially digital, have expanded its reach via the online environment to undermine our democracy and society. Since online trust is easy to break but difficult to rebuild, industry needs to work together with us on this issue. Online platforms have an important role to play in fighting disinformation campaigns organised by individuals and countries who aim to threaten our democracy.”

The EC’s next steps now will be bringing the relevant parties together — including platforms, the ad industry and “major advertisers” — in a forum to work on greasing cooperation and getting them to apply themselves to what are still, at this stage, voluntary measures.

“The forum’s first output should be an EU–wide Code of Practice on Disinformation to be published by July 2018, with a view to having a measurable impact by October 2018,” says the Commission. 

The first progress report will be published in December 2018. “The report will also examine the need for further action to ensure the continuous monitoring and evaluation of the outlined actions,” it warns.

And if self-regulation fails…

In a fact sheet further fleshing out its plans, the Commission states: “Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms.”

And for “a few” read: Mainstream social platforms — so likely the big tech players in the social digital arena: Facebook, Google, Twitter.

For potential regulatory actions, tech giants need only look to Germany, where a 2017 social media hate speech law has introduced fines of up to €50M for platforms that fail to comply with valid takedown requests within 24 hours for simple cases — an example of the kind of scary EU-wide law that could come rushing down the pipe at them if the Commission and EU states decide it’s necessary to legislate.

Justice and consumer affairs commissioner Vera Jourova signaled in January that her preference, on hate speech at least, was to continue pursuing the voluntary approach — though she also said some Member States’ ministers are open to a new EU-level law should that approach fail.

In Germany the so-called NetzDG law has faced criticism for pushing platforms towards risk aversion-based censorship of online content. And the Commission is clearly keen to avoid such charges being leveled at its proposals, stressing that if regulation were to be deemed necessary “such [regulatory] actions should in any case strictly respect freedom of expression”.

Commenting on the Code of Practice proposals, a Facebook spokesperson told us: “People want accurate information on Facebook – and that’s what we want too. We have invested heavily in fighting false news on Facebook by disrupting the economic incentives for the spread of false news, building new products and working with third-party fact checkers.”

A Twitter spokesman declined to comment on the Commission’s proposals but flagged contributions he said the company is already making to support media literacy — including an event last week at its EMEA HQ.

At the time of writing Google had not responded to a request for comment.

Last month the Commission further tightened the screw on platforms over terrorist content specifically — saying it wants such content taken down within an hour of a report as a general rule. Though it still hasn’t taken the step of cementing that hour ‘rule’ into legislation, again preferring to see how much action it can voluntarily squeeze out of platforms via a self-regulation route.

 



from Social – TechCrunch https://ift.tt/2vYxesQ

Logan Paul ends daily YouTube vlog series

The popular video creator says he will no longer post daily updates after 536 uploads.

from BBC News - Technology https://ift.tt/2r9LoSo

China shuts down Player Unknown cheat code gang

The cheats helped players survive longer in the hugely popular survival shooter game.

from BBC News - Technology https://ift.tt/2r9msdC

Facebook 'downvote' button in new test

Facebook says the tool, now being trialled in New Zealand, is not a "dislike" key.

from BBC News - Technology https://ift.tt/2FslrT6

Legal row over who owns France.com domain

A man who has run the France.com website since 1994 is suing after it was given to the French government.

from BBC News - Technology https://ift.tt/2HDvEhr

Tesla driver banned for M1 autopilot seat-switch

He said he was the "unlucky one who got caught" after being seen in the passenger seat on the M1.

from BBC News - Technology https://ift.tt/2Foam5D

Regain control

The new rules which might help stop private information being shared without our knowledge.

from BBC News - Technology https://ift.tt/2JG9YC1

Body cameras deter attacks and abuse at Welsh hospitals

Security staff at five of Wales' health boards now wear recording devices to try to deter violence.

from BBC News - Technology https://ift.tt/2JFtdeH

Porn block

Soon you'll have to prove you're 18 years old if you want to watch pornography online. Here is all you need to know.

from BBC News - Technology https://ift.tt/2HU3TV3

T-Mobile agrees $26bn mega-merger with Sprint

The US telecoms firms could win more customers through the deal, as long as regulators approve it.

from BBC News - Technology https://ift.tt/2w5RWr1

Sentinel tracks ships' dirty emissions from orbit

The EU's new satellite pollution-tracker will be a powerful tool to monitor vessels' emissions.

from BBC News - Technology https://ift.tt/2HQ43gv

God of War: Games no longer where actors' careers 'go to die'

Christopher Judge, who plays Kratos in the new God of War, talks about the jump from the silver screen to games.

from BBC News - Technology https://ift.tt/2w793Zx

'Hacker's paradise'

How cyber security firm Darktrace was set up by former members of the UK security services and maths professors.

from BBC News - Technology https://ift.tt/2Kpbz0j

Sunday, April 29, 2018

Microsoft Finally Ditches Its Gun Emoji, Following Google, Facebook And Apple

In 2015, the tech giant Apple was pressured by a group in favour of firearms control and decided to change its pistol emoji to a toy version. Now, according to the latest reports, the tech giant Microsoft has finally ditched its pistol emoji and swapped it for a water gun, following in the footsteps of Google, Facebook and Apple.

The post Microsoft Finally Ditches Its Gun Emoji, Following Google, Facebook And Apple appeared first on Tech Viral.



from Tech Viral https://ift.tt/2KjmY1n

This New Google Music Service Will Bring The End Of Google Play Music

Google has a habit of trying to meet the application needs of its Android ecosystem. Now, according to the latest reports, Google is preparing to launch a new music service, YouTube Remix, its latest attempt at a streaming music offering, later this year. Its release may spell the end of another of the company's platforms: Google Play Music…

The post This New Google Music Service Will Bring The End Of Google Play Music appeared first on Tech Viral.



from Tech Viral https://ift.tt/2Fs3NPo

Top 15+ Best Screen Recording Software For Windows

If you are looking for the best screen recording software for Windows, you are in the right spot. Screen recording is an excellent way to capture activity on a computer, and with these tools you can record everything you do on screen. In this article, we review the best screen recording apps for Windows that will help you record your screen efficiently…

The post Top 15+ Best Screen Recording Software For Windows appeared first on Tech Viral.



from Tech Viral https://ift.tt/2vd8QBa

Top 15+ Best Android Apps For Downloading Free Music 2018

Here are the top Android apps for downloading free music. Music always makes our environment better, whether we are at a party or with friends and family, so for music lovers we have reviewed the best apps that let you listen to and download music for free on your Android smartphone…

The post Top 15+ Best Android Apps For Downloading Free Music 2018 appeared first on Tech Viral.



from Tech Viral https://ift.tt/2pBtkzl

Web Browser’s Incognito Mode Isn’t As Private As You Think

Incognito mode is meant for private browsing: if you use it, your web browser won't save any history, cookies or passwords. But you can still be tracked. If you browse in incognito and sign in to a Google app or any third-party service, Google or that service provider can still see what you are doing. All major browsers have an incognito mode that limits the amount of data saved locally, but will your browsing be totally private? Let's find out…

The post Web Browser’s Incognito Mode Isn’t As Private As You Think appeared first on Tech Viral.



from Tech Viral https://ift.tt/2r9CDrd

Meet The New Browser From Opera

Opera is known for creating alternative browsers that bring freshness to the field. It runs many experiments to ensure the best experience, and this time it has got it right again: according to the latest reports, Opera has just launched Opera Touch, a new web browser for Android designed to be used with one hand…

The post Meet The New Browser From Opera appeared first on Tech Viral.



from Tech Viral https://ift.tt/2r9Btw7

This ‘Master Key’ Can Hack Any Hotel Room

Technology touches all corners of society, even the door of a hotel room. According to the latest reports, a well-known security researcher has developed a master key that can open virtually any hotel room. Where there is technology, hackers will find ways to use their knowledge in the service of evil. If you think your hotel room is safe, it may not be…

The post This ‘Master Key’ Can Hack Any Hotel Room appeared first on Tech Viral.



from Tech Viral https://ift.tt/2KoQlj0

iTunes For Windows 10 Hits Microsoft Store, Download Now

When Microsoft announced Windows 10 S, it promised several apps that would pop up in its Windows application store. Among them, one of the most interesting was iTunes. With no exact timeline given, iTunes was slow to arrive, but according to the latest reports, Apple's iTunes for Windows 10 has now hit the Microsoft Store…

The post iTunes For Windows 10 Hits Microsoft Store, Download Now appeared first on Tech Viral.



from Tech Viral https://ift.tt/2FqLYAm

Fortnite: 13-year-old is game's youngest professional player

Kyle Jackson from Sidcup in Kent is set to compete for cash prizes in events all over the world.

from BBC News - Technology https://ift.tt/2JAo33K

Saturday, April 28, 2018

Fake five-star reviews being bought and sold online

A trade in false online reviews relied upon by millions is identified by a BBC investigation.

from BBC News - Technology https://ift.tt/2HzKidI

Facebook’s dark ads problem is systemic

Facebook’s admission to the UK parliament this week that it had unearthed unquantified thousands of dark fake ads after investigating fakes bearing the face and name of well-known consumer advice personality, Martin Lewis, underscores the massive challenge for its platform on this front. Lewis is suing the company for defamation over its failure to stop bogus ads besmirching his reputation with their associated scams.

Lewis decided to file his campaigning lawsuit after reporting 50 fake ads himself, having been alerted to the scale of the problem by consumers contacting him to ask if the ads were genuine or not. But the revelation that there were in fact associated “thousands” of fake ads being run on Facebook as a clickdriver for fraud shows the company needs to change its entire system, he has now argued.

In a response statement after Facebook’s CTO Mike Schroepfer revealed the new data-point to the DCMS committee, Lewis wrote: “It is creepy to hear that there have been 1,000s of adverts. This makes a farce of Facebook’s suggestion earlier this week that to get it to take down fake ads I have to report them to it.”

“Facebook allows advertisers to use what is called ‘dark ads’. This means they are targeted only at set individuals and are not shown in a time line. That means I have no way of knowing about them. I never get to hear about them. So how on earth could I report them? It’s not my job to police Facebook. It is Facebook’s job — it is the one being paid to publish scams.”

As Schroepfer told it to the committee, Facebook had removed the additional “thousands” of ads “proactively” — but as Lewis points out that action is essentially irrelevant given the problem is systemic. “A one off cleansing, only of ads with my name in, isn’t good enough. It needs to change its whole system,” he wrote.

In a statement on the case, a Facebook spokesperson told us: “We have also offered to meet Martin Lewis in person to discuss the issues he’s experienced, explain the actions we have taken already and discuss how we could help stop more bad ads from being placed.”

The committee raised various ‘dark ads’-related issues with Schroepfer — asking how, as with the Lewis example, a person could complain about an advert they literally can’t see?

The Facebook CTO avoided a direct answer but essentially his reply boiled down to: People can’t do anything about this right now; they have to wait until June when Facebook will be rolling out the ad transparency measures it trailed earlier this month — then he claimed: “You will basically be able to see every running ad on the platform.”

But there’s a very big difference between being able to technically see every ad running on the platform — and literally being able to see every ad running on the platform. (And, well, pity the pair of eyeballs that were condemned to that Dantean fate… )

In its PR about the new tools Facebook says a new feature — called “view ads” — will let users see the ads a Facebook Page is running, even if that Page’s ads haven’t appeared in an individual’s News Feed. So that’s one minor concession. However, while ‘view ads’ will apply to every advertiser Page on Facebook, a Facebook user will still have to know about the Page, navigate to it and click to ‘view ads’.

What Facebook is not launching is a public, searchable archive of all ads on its platform. It’s only doing that for a sub-set of ads — specifically those labeled “Political Ad”.

Clearly the Martin Lewis fakes wouldn’t fit into that category. So Lewis won’t be able to run searches against his name or face in future to try to identify new dark fake Facebook ads that are trying to trick consumers into scams by misappropriating his brand. Instead, he’d have to employ a massive team of people to click “view ads” on every advertiser Page on Facebook — and do so continuously, so long as his brand lasts — to try to stay ahead of the scammers.

So unless Facebook radically expands the ad transparency tools it has announced thus far it’s really not offering any kind of fix for the dark fake ads problem at all. Not for Lewis. Nor indeed for any other personality or brand that’s being quietly misused in the hidden bulk of scams we can only guess are passing across its platform.

Kremlin-backed political disinformation scams are really just the tip of the iceberg here. But even in that narrow instance Facebook estimated there had been 80,000 pieces of fake content targeted at just one election.

What’s clear is that without regulatory intervention the burden of proactive policing of dark ads and fake content on Facebook will keep falling on users — who will now have to actively sift through Facebook Pages to see what ads they’re running and try to figure out if they look legit.

Yet Facebook has 2BN+ users globally. The sheer number of Pages and advertisers on its platform renders “view ads” an almost entirely meaningless addition, especially as cyberscammers and malicious actors are also going to be experts at setting up new accounts to further their scams — moving on to the next batch of burner accounts after they’ve netted each fresh catch of unsuspecting victims.

The committee asked Schroepfer whether Facebook retains money from advertisers it ejects from its platform for running ‘bad ads’ — i.e. after finding they were running an ad its terms prohibit. He said he wasn’t sure, and promised to follow up with an answer. Which rather suggests it doesn’t have an actual policy. Mostly it’s happy to collect your ad spend.

“I do think we are trying to catch all of these things pro-actively. I won’t want the onus to be put on people to go find these things,” he also said, which is essentially a twisted way of saying the exact opposite: That the onus remains on users — and Facebook is simply hoping to have a technical capacity that can accurately review content at scale at some undefined moment in the future.

“We think of people reporting things, we are trying to get to a mode over time — particularly with technical systems — that can catch this stuff up front,” he added. “We want to get to a mode where people reporting bad content of any kind is the sort of defense of last resort and that the vast majority of this stuff is caught up front by automated systems. So that’s the future that I am personally spending my time trying to get us to.”

Trying, want to, future… aka zero guarantees that the parallel universe he was describing will ever align with the reality of how Facebook’s business actually operates — right here, right now.

In truth this kind of contextual AI content review is a very hard problem, as Facebook CEO Mark Zuckerberg has himself admitted. And it’s by no means certain the company can develop robust systems to properly police this kind of stuff. Certainly not without hiring orders of magnitude more human reviewers than it’s currently committed to doing. It would need to employ literally millions more humans to manually check all the nuanced things AIs simply won’t be able to figure out.

Or else it would need to radically revise its processes — as Lewis has suggested — to make them a whole lot more conservative than they currently are — by, for example, requiring much more careful and thorough scrutiny of (and even pre-vetting) certain classes of high-risk adverts. So yes, by engineering in friction.

In the meanwhile, as Facebook continues its lucrative business as usual — raking in huge earnings thanks to its ad platform (in its Q1 earnings this week it reported a whopping $11.97BN in revenue) — Internet users are left performing unpaid moderation for a massively wealthy for-profit business while simultaneously being subject to the bogus and fraudulent content its platform is also distributing at scale.

There’s a very clear and very major asymmetry here — and one European lawmakers at least look increasingly wise to.

Facebook frequently falling back on pointing to its massive size as the justification for why it keeps failing on so many types of issues — be it consumer safety or indeed data protection compliance — may even have interesting competition-related implications, as some have suggested.

On the technical front, Schroepfer was asked specifically by the committee why Facebook doesn’t use the facial recognition technology it has already developed — which it applies across its user-base for features such as automatic photo tagging — to block ads that are using a person’s face without their consent.

“We are investigating ways to do that,” he replied. “It is challenging to do technically at scale. And it is one of the things I am hopeful for in the future that would catch more of these things automatically. Usually what we end up doing is a series of different features would figure out that these ads are bad. It’s not just the picture, it’s the wording. What can often catch classes — what we’ll do is catch classes of ads and say ‘we’re pretty sure this is a financial ad, and maybe financial ads we should take a little bit more scrutiny on up front because there is the risk for fraud’.

“This is why we took a hard look at the hype going around cryptocurrencies. And decided that — when we started looking at the ads being run there, the vast majority of those were not good ads. And so we just banned the entire category.”

That response is also interesting, given that many of the fake ads Lewis is complaining about (which incidentally often point to offsite crypto scams) — and indeed which he has been complaining about for months at this point — fall into a financial category.

If Facebook can easily identify classes of ads using its current AI content review systems why hasn’t it been able to proactively catch the thousands of dodgy fake ads bearing Lewis’ image?

Why did it require Lewis to make a full 50 reports — and have to complain to it for months — before Facebook did some ‘proactive’ investigating of its own?

And why isn’t it proposing to radically tighten the moderation of financial ads, period?

The risks to individual users here are stark and clear. (Lewis writes, for example, that “one lady had over £100,000 taken from her”.)

Again it comes back to the company simply not wanting to slow down its revenue engines, nor take the financial hit and business burden of employing enough humans to review all the free content it’s happy to monetize. It also doesn’t want to be regulated by governments — which is why it’s rushing out its own set of self-crafted ‘transparency’ tools, rather than waiting for rules to be imposed on it.

Committee chair Damian Collins concluded one round of dark ads questions for the Facebook CTO by remarking that his overarching concern about the company’s approach is that “a lot of the tools seem to work for the advertiser more than they do for the consumer”. And, really, it’s hard to argue with that assessment.

This is not just an advertising problem either. All sorts of other issues that Facebook had been blasted for not doing enough about can also be explained as a result of inadequate content review — from hate speech, to child protection issues, to people trafficking, to ethnic violence in Myanmar, which the UN has accused its platform of exacerbating (the committee questioned Schroepfer on that too, and he lamented that it is “awful”).

In the Lewis fake ads case, this type of ‘bad ad’ — as Facebook would call it — should really be the most trivial type of content review problem for the company to fix because it’s an exceedingly narrow issue, involving a single named individual. (Though that might also explain why Facebook hasn’t bothered; albeit having ‘total willingness to trash individual reputations’ as your business M.O. doesn’t make for a nice PR message to sell.)

And of course it goes without saying there are far more — and far more murky and obscure — uses of dark ads that remain to be fully dragged into the light where their impact on people, societies and civilized processes can be scrutinized and better understood. (The difficulty of defining what is a “political ad” is another lurking loophole in the credibility of Facebook’s self-serving plan to ‘clean up’ its ad platform.)

Schroepfer was asked by one committee member about the use of dark ads to try to suppress African American votes in the US elections, for example, but he just reframed the question to avoid answering it — saying instead that he agrees with the principle of “transparency across all advertising”, before repeating the PR line about tools coming in June. Shame those “transparency” tools look so well designed to ensure Facebook’s platform remains as shadily opaque as possible.

Whatever the role of US-targeted Facebook dark ads in African American voter suppression, Schroepfer wasn’t at all comfortable talking about it — and Facebook isn’t publicly saying. Though the CTO confirmed to the committee that Facebook employs people to work with advertisers, including political advertisers, to “help them to use our ad systems to best effect”.

“So if a political campaign were using dark advertising your people helping support their use of Facebook would be advising them on how to use dark advertising,” astutely observed one committee member. “So if somebody wanted to reach specific audiences with a specific message but didn’t want another audience to [view] that message because it would be counterproductive, your people who are supporting these campaigns by these users spending money would be advising how to do that wouldn’t they?”

“Yeah,” confirmed Schroepfer, before immediately pointing to Facebook’s ad policy — claiming “hateful, divisive ads are not allowed on the platform”. But of course bad actors will simply ignore your policy unless it’s actively enforced.

“We don’t want divisive ads on the platform. This is not good for us in the long run,” he added, without shedding so much as a chink more light on any of the bad things Facebook-distributed dark ads might have already done.

At one point he even claimed not to know what the term ‘dark advertising’ meant — leading the committee member to read out the definition from Google, before noting drily: “I’m sure you know that.”

Pressed again on why Facebook can’t use facial recognition at scale to at least fix the Lewis fake ads — given it’s already using the tech elsewhere on its platform — Schroepfer played down the value of the tech for these types of security use-cases, saying: “The larger the search space you use, so if you’re looking across a large set of people the more likely you’ll have a false positive — that two people tend to look the same — and you won’t be able to make automated decisions that said this is for sure this person.

“This is why I say that it may be one of the tools but I think usually what ends up happening is it’s a portfolio of tools — so maybe it’s something about the image, maybe the fact that it’s got ‘Lewis’ in the name, maybe the fact that it’s a financial ad, wording that is consistent with financial ads. We tend to use a basket of features in order to detect these things.”
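Schroepfer’s scaling argument is easy to sanity-check with a rough back-of-the-envelope calculation (the numbers below are hypothetical, not Facebook’s): even a matcher with a tiny per-comparison false-positive rate produces a flood of wrong matches when one probe face is compared against a user base of billions.

```python
# Rough illustration (hypothetical numbers): why face matching gets
# unreliable as the search space grows, as Schroepfer describes.
def expected_false_positives(population: int, false_positive_rate: float) -> float:
    """Expected number of wrong matches when one probe face is
    compared against every face in the population."""
    return population * false_positive_rate

# One ad image checked against 2 billion user photos, assuming a
# (made-up) 0.01% chance of a false match per comparison:
fp = expected_false_positives(2_000_000_000, 0.0001)
print(f"{fp:,.0f} expected false matches")
```

At that (assumed) accuracy the system would still flag around 200,000 innocent look-alikes per probe, which is why a single-signal face match can’t make automated decisions on its own.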

That’s also an interesting response since it was a security use-case that Facebook selected as the first of just two sample ‘benefits’ it presents to users in Europe ahead of the choice it is required (under EU law) to offer people on whether to switch facial recognition technology on or keep it turned off — claiming it “allows us to help protect you from a stranger using your photo to impersonate you”…

Yet judging by its own CTO’s analysis, Facebook’s face recognition tech would actually be pretty useless for identifying “strangers” misusing your photographs — at least without being combined with a “basket” of other unmentioned (and doubtless equally privacy-hostile) technical measures.

So this is yet another example of a manipulative message being put out by a company that is also the controller of a platform that enables all sorts of unknown third parties to experiment with and distribute their own forms of manipulative messaging at vast scale, thanks to a system designed to facilitate — nay, embrace — dark advertising.

What face recognition technology is genuinely useful for is Facebook’s own business. Because it gives the company yet another personal signal to triangulate and better understand who people on its platform are really friends with — which in turn fleshes out the user-profiles behind the eyeballs that Facebook uses to fuel its ad targeting, money-minting engines.

For profiteering use-cases the company rarely sits on its hands when it comes to engineering “challenges”. Hence its erstwhile motto to ‘move fast and break things’ — which has now, of course, morphed uncomfortably into Zuckerberg’s 2018 mission to ‘fix the platform’; thanks, in no small part, to the existential threat posed by dark ads which, up until very recently, Facebook wasn’t saying anything about at all. Except to claim it was “crazy” to think they might have any influence.

And now, despite major scandals and political pressure, Facebook is still showing zero appetite to “fix” its platform — because the issues being thrown into sharp relief are actually there by design; this is how Facebook’s business functions.

“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory,” wrote Zuckerberg in January, underlining how much easier it is to break stuff than put things back together — or even just make a convincing show of fiddling with sticking plaster.



from Social – TechCrunch https://ift.tt/2vUlg3j

Is Google Planning To Kill Android?

Recently, XDA Developers' Mishaal Rahman spotted Fuchsia OS in AOSP's ART (Android runtime) branch. These findings could change the future of the Android operating system, because they theoretically mean that Android apps will be compatible with Fuchsia OS. If you remember, back in 2016 Google teased its mysterious Fuchsia operating system, and it has been making headlines ever since. Several rumors claim that Google is looking to use Fuchsia in place of both Android and Chrome OS…

The post Is Google Planning To Kill Android? appeared first on Tech Viral.



from Tech Viral https://ift.tt/2r6S9nQ

Top 25 Must-Have Productivity Apps For Your Android

Today, we are going to share the top 25 best productivity apps for your Android device. The days when the phone was only used for voice communication are gone; this is a generation where our smartphones are more powerful than the personal computers we once sat at, and they fit in our pockets. There are many productivity apps available in the Google Play Store, and here we review the best among them…

The post Top 25 Must-Have Productivity Apps For Your Android appeared first on Tech Viral.



from Tech Viral https://ift.tt/2HDshqX

This Ransomware Will Lock Your Files Unless You Play This Game!

Recent reports from Kotaku claim that the PUBG ransomware locks users out of the computer as soon as it infects the system and displays a ransom screen. However, this ransom screen could be a gamer's paradise, because it simply asks users to play the game PUBG and have fun. In recent years we have seen how ransomware, a type of malware commonly used by attackers, has become an easy way to extort users…

The post This Ransomware Will Lock Your Files Unless You Play This Game! appeared first on Tech Viral.



from Tech Viral https://ift.tt/2r7inHD

How To Improve Your Aim in PC Games

Let's have a look at how to improve your aim in PC games. There is no magic that will instantly make you aim perfectly, so you need to learn some basic techniques that will help, and that is what the tutorial below covers. Aiming is essential to staying focused on the action in any desktop game, and you control your aim through the mouse pointer…

The post How To Improve Your Aim in PC Games appeared first on Tech Viral.



from Tech Viral https://ift.tt/2vSsZ27

Top 15+ Best Free Photoshop Alternatives 2018

It is well known that the king of photo retouching is the one and only Adobe Photoshop, a program for Windows and Mac that is widely used by designers and enthusiasts. However, the price of Adobe Photoshop is not within everyone's reach. The good news is that there are good free alternatives, and in this article we round up the best of them…

The post Top 15+ Best Free Photoshop Alternatives 2018 appeared first on Tech Viral.



from Tech Viral https://ift.tt/2je5Y09

Friday, April 27, 2018

Facebook shrinks fake news after warnings backfire

Tell someone not to do something and sometimes they just want to do it more. That’s what happened when Facebook put red flags on debunked fake news. Users who wanted to believe the false stories had their fevers ignited and they actually shared the hoaxes more. That led Facebook to ditch the incendiary red flags in favor of showing Related Articles with more level-headed perspectives from trusted news sources.

But now it’s got two more tactics to reduce the spread of misinformation, which Facebook detailed at its Fighting Abuse @Scale event in San Francisco. Facebook’s director of News Feed integrity Michael McNally and data scientist Lauren Bose held a talk discussing all the ways it intervenes. The company is trying to walk a fine line between censorship and sensibility.

These red warning labels actually backfired and made some users more likely to share, so Facebook switched to showing Related Articles.

First, rather than call more attention to fake news, Facebook wants to make it easier to miss these stories while scrolling. When Facebook’s third-party fact-checkers verify an article is inaccurate, Facebook will shrink the size of the link post in the News Feed. “We reduce the visual prominence of feed stories that are fact-checked false,” a Facebook spokesperson confirmed to me.

As you can see below in the image on the left, confirmed-to-be-false news stories on mobile show up with their headline and image rolled into a single smaller row of space. Below, a Related Articles box shows “Fact-Checker”-labeled stories debunking the original link. Meanwhile on the right, a real news article’s image appears about 10 times larger, and its headline gets its own space.


Second, Facebook is now using machine learning to look at newly published articles and scan them for signs of falsehood. Combined with other signals like user reports, Facebook can use high falsehood prediction scores from the machine learning systems to prioritize articles in its queue for fact-checkers. That way, the fact-checkers can spend their time reviewing articles that are already qualified to probably be wrong.

“We use machine learning to help predict things that might be more likely to be false news, to help prioritize material we send to fact-checkers (given the large volume of potential material),” a spokesperson from Facebook confirmed. The social network now works with 20 fact-checkers in several countries around the world, but it’s still trying to find more to partner with. In the meantime, the machine learning will ensure their time is used efficiently.
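The triage idea described above can be sketched in a few lines. This is a minimal illustration, not Facebook's actual system: the scoring weights and the cap on user reports are invented for the example, and the real signals are certainly richer.

```python
# Minimal sketch (hypothetical scoring) of the triage idea: combine a
# model's falsehood score with user-report volume, then send the
# highest-priority articles to human fact-checkers first.
import heapq

def priority(model_score: float, report_count: int) -> float:
    # Assumed weighting; user reports add up to 1.0 extra priority.
    return model_score + 0.1 * min(report_count, 10)

def triage(articles):
    """articles: list of (id, model_score, report_count) tuples.
    Returns article ids ordered from most to least suspicious."""
    heap = [(-priority(score, reports), aid) for aid, score, reports in articles]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# A heavily reported article can outrank one the model alone scored higher:
queue = triage([("a", 0.9, 2), ("b", 0.4, 30), ("c", 0.95, 0)])
print(queue)
```

The point of the heap is simply that fact-checkers' limited time goes to the articles most likely to be false, whatever mix of signals pushed them to the top.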

Bose and McNally also walked the audience through Facebook’s “ecosystem” approach that fights fake news at every step of its development:

  • Account Creation – If accounts are created using fake identities or networks of bad actors, they’re removed.
  • Asset Creation – Facebook looks for similarities to shut down clusters of fraudulently created Pages and inhibit the domains they’re connected to.
  • Ad Policies – Malicious Pages and domains that exhibit signatures of wrong use lose the ability to buy or host ads, which deters them from growing their audience or monetizing it.
  • False Content Creation – Facebook applies machine learning to text and images to find patterns that indicate risk.
  • Distribution – To limit the spread of false news, Facebook works with fact-checkers. If they debunk an article, its size shrinks, Related Articles are appended and Facebook downranks the stories in News Feed.

Together, by chipping away at each phase, Facebook says it can reduce the spread of a false news story by 80 percent. Facebook needs to prove it has a handle on false news before more big elections in the U.S. and around the world arrive. There’s a lot of work to do, but Facebook has committed to hiring enough engineers and content moderators to attack the problem. And with conferences like Fighting Abuse @Scale, it can share its best practices with other tech companies so Silicon Valley can put up a united front against election interference.
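That 80 percent figure is plausible as the compound effect of the staged approach: modest, independent reductions at each phase multiply together. The five equal stage values below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical illustration of how per-stage reductions compound.
def surviving_fraction(stage_reductions):
    """Fraction of a story's spread that survives a pipeline of stages,
    where each stage removes the given fraction of what remains."""
    frac = 1.0
    for r in stage_reductions:
        frac *= (1.0 - r)
    return frac

# Five stages (account creation, asset creation, ad policies, content
# detection, distribution), each cutting ~27.5% of remaining spread:
stages = [0.275] * 5
print(f"overall reduction: {1 - surviving_fraction(stages):.0%}")
```

No single stage needs to be dramatic; five moderate filters in sequence are enough to approach the overall reduction Facebook cites.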



from Social – TechCrunch https://ift.tt/2r6F3Hl

Joyoshare Media Cutter for Mac: A Convenient Media Cutter For Mac

Joyoshare Media Cutter for Mac is one of the best video editing tools you can have, featuring all the advanced tools that serious video editing requires. Another nice thing about Joyoshare Media Cutter is that it comes with a trial version: it's a premium tool, but you can try it first and buy a license later. If we look around, YouTube content creation has reached new heights, with most creators busy making videos…

The post Joyoshare Media Cutter for Mac: A Convenient Media Cutter For Mac appeared first on Tech Viral.



from Tech Viral https://ift.tt/2Jwo9JE

30 Best Tips And Tricks For Rooted Android Device

Rooting gives Android users complete control over the Android system; when you root a smartphone, it lets you act as the device's administrator. If you have a rooted Android device, take a look at the best tips and tricks you can try to change your overall experience.

The post 30 Best Tips And Tricks For Rooted Android Device appeared first on Tech Viral.



from Tech Viral https://ift.tt/2lH6LK4

Top 30 Best Root Apps For Your Android Device 2018

If you are looking for the best root apps of 2018, this is the right place. A rooted Android phone has many advantages, because rooting lets you act as the phone's administrator. But how can you take full control of your rooted device? The answer is to use the best root apps for Android, so below we have listed the top 30 root apps of 2018.

The post Top 30 Best Root Apps For Your Android Device 2018 appeared first on Tech Viral.



from Tech Viral https://ift.tt/2p4JrnT

Deadline to amend UK surveillance laws

A High Court judgement calling for changes came out of a legal challenge mounted by rights group Liberty.

from BBC News - Technology https://ift.tt/2vN2KKe

Tech Tent: the technology of pleasure

Meet the British firm making its mark in the sex tech sector.

from BBC News - Technology https://ift.tt/2r25A9E

Facebook’s Messenger Kids’ app gains a ‘sleep mode’

Facebook’s Messenger Kids, the social network’s new chat app for the under-13 crowd, has been designed to give parents more control over their kids’ contact list. Today, the app is gaining a new feature, “sleep mode,” aimed at giving parents the ability to turn the app off at designated times. The idea is that parents and children will talk about when it’s appropriate to send messages to friends and family, and when it’s time for other activities – like homework or bedtime, for example.

The app, which launched last December, has not been without controversy.

Some see it as a gateway drug for Facebook proper. Others whine that “kids should be playing outside!” – as if kids don’t engage in all sorts of activities, including device usage, at times. And of course, amid Facebook’s numerous scandals around data privacy, it’s hard for some parents to fathom installing a Facebook-operated anything on their child’s phone or tablet.

But the reality, from down here in the parenting trenches, is that kids are messaging anyway and we’re desperately short on tools.

Instead of apps built with children’s and parents’ needs in mind, our kids are fumbling around on their own, making mistakes, then having their devices taken away in punishment.

The truth is, with the kids, it’s too late to put the toothpaste back in the tube. Our children are FaceTime’ing their way through Roblox playdates, they’re texting grandma and grandpa, they’re watching YouTube instead of TV, and they’re begging for too-adult apps like Snapchat – so they can play with the face filters – and Musical.ly, which has a lot of inappropriate content. (Seriously, can someone launch kid-safe versions?)

Until Messenger Kids, parents haven’t been offered any social or messaging apps built with monitoring and education in mind.

I decided to install it on my own child’s device, and I’ll admit being conflicted. But I’m using it with my child as a learning tool. We talk about how to use the app’s features, but also about appropriate messaging behavior – what to chat about, why not to send a dozen stickers at once, and how to politely end a conversation, for example.

Unlike child predator playgrounds like Kik, popularity-focused social apps like Instagram, or apps where messages simply vanish like Snapchat, Messenger Kids lets parents choose the contact list and control the experience. And, as a backup, I have a copy of the app on my own phone, so I can spot check messages sent when I’m not around.

With the new sleep mode feature, I can now turn Messenger Kids off at certain times. That means no more 8 AM video calls to the BFF. (Yes, we’ve discussed this – after the fact. Sorry, BFF’s parents.) And no more messaging right at bedtime, either.

To configure sleep mode, parents access the Messenger Kids controls from the main Facebook app, and tap on the child’s name. You can create different settings for weekdays and weekends. If the child tries to use the app during these times, they’ll instead see a message that says the app is in sleep mode and to come back later.

The control panel is also where parents can add and remove contacts, delete the child’s account, or create a new account.

Facebook suggests that parents have a discussion with kids about the boundaries they’re creating when turning on sleep mode.


That may seem obvious, but it’s surprisingly not. I’ve actually heard some parents scoff at parental control features because they think it’s about offloading the job of parenting to technology. It’s not. It’s about using tools and parenting techniques together – whether that’s internet off times, device or app “bedtimes,” internet filtering, or whatever other mechanisms parents employ.

I understand if you can’t get past the fact that the app is from Facebook, of all places. Or you have a philosophical point of view on using Facebook products. But Facebook integration means this app could scale. In the few months it’s been live, the app has been downloaded around 325,000 times, according to data from Sensor Tower.

Messenger Kids is a free download on iOS and Android.

 



from Social – TechCrunch https://ift.tt/2vRSdO6

Facebook drops fundraising fees for personal causes

Despite Facebook being under fire for everything pertaining to Cambridge Analytica, the company still hopes to be able to do some good. Today, Facebook is dropping its platform fees pertaining to fundraisers for personal causes.

That means Facebook is getting rid of the 4.3 percent platform fee in the U.S. and the 6.2 percent fee in Canada. Those fees covered the review and vetting process for each fundraiser; now, Facebook says it will absorb the costs associated with those safety and protection measures.

“We’re continuously learning and this was something we wanted to do to help people maximize the benefits,” Facebook Head of Product for Social Good Asha Sharma told me over the phone.

To be clear, there will still be fees for payment processing and taxes. In the U.S. and Canada, payment processing fees are 2.6 percent plus $0.30.

Facebook is also unveiling two new features for its fundraising tool.
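With the platform fee waived, the only deduction left (before taxes) is the processing fee quoted above: 2.6 percent of the donation plus a flat $0.30. A minimal sketch of that arithmetic, assuming rounding to the nearest cent (the article doesn't specify rounding):

```python
# Sketch of the U.S./Canada payment-processing fee described above:
# 2.6% of the donation plus a flat $0.30. The rates come from the
# article; cent rounding is our assumption.

def processing_fee(donation_dollars: float) -> float:
    """Return the payment-processing fee on a donation, in dollars."""
    return round(donation_dollars * 0.026 + 0.30, 2)

def net_to_fundraiser(donation_dollars: float) -> float:
    """Donation amount remaining after the processing fee
    (the 4.3% U.S. platform fee no longer applies)."""
    return round(donation_dollars - processing_fee(donation_dollars), 2)

print(processing_fee(100.00))     # fee on a $100 donation
print(net_to_fundraiser(100.00))  # what the fundraiser keeps
```

On a $100 donation the fee works out to $2.90, so the fundraiser keeps $97.10, versus roughly $92.80 when the 4.3 percent platform fee still applied.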

The first is the ability for people to match donations for non-profit fundraisers, and the second is the expansion of categories for personal causes. Now, in addition to raising money for things like vet bills and personal emergencies, people can also raise money for travel (community trips or medical needs), family-related causes (adoption, etc.), religious events and volunteer supplies.

 

Facebook isn’t yet sharing specific dollar amounts raised pertaining to fundraisers, but says its tool has helped over 750,000 non-profits collect donations. All Sharma would say about personal causes is that “we’re seeing activity across all of these categories, which is why we have them.”



from Social – TechCrunch https://ift.tt/2Hz0dbW

GDPR: Your data protection questions answered

Deputy Information Commissioner answers questions from BBC Radio 5 live listeners about changes to law.

from BBC News - Technology https://ift.tt/2HwMdvy

Maverick, a social network for young women, launches with $2.7M in funding

While Bumble BFF and Hey! Vina help adult women find new friends, there isn’t a social network dedicated to young women.

But Brooke Chaffin and Catherine Connors are looking to change that with the introduction of Maverick, a social network that connects young girls with female mentors to express their creativity in a safe space.

Here’s how it works:

When a new user signs up, they can browse through various challenges set by Catalysts, role models handpicked by the founders to inspire the network’s younger demographic. These challenges include things like designing their own superhero, creating a dance number or choosing a mantra.

Users, usually between the ages of 10 and 20, can post their response to a challenge via photo or a 30-second video and browse the responses of others. Interestingly, Maverick has done away with ‘likes’ and instead offers points for various types of engagement, like posting a response to a challenge, posting a comment, or giving someone a badge.

For now, there are four badges on the platform (unique, creative, unstoppable, and daring) and the company has plans to add more badges as it grows.

But Maverick isn’t just an app. The company also plans on holding a series of one-day live events across the country, highlighting young women emerging on the platform in categories like STEAM, entrepreneurship, comedy and music.

In fact, the first live event goes down tomorrow in Los Angeles, featuring “Founding Mavericks,” or role models such as Chloe and Halle Bailey, Brooklyn and Bailey McKnight, Daunnette Reyome, Laurie Hernandez and Ruby Karp.

For now, Maverick is a free app focused on growing its user base. But the founders see an opportunity to turn Maverick into a utility, not unlike LinkedIn, offering a subscription for premium features. And it makes sense that LinkedIn would serve as inspiration for Chaffin and Connors, as LinkedIn CEO Jeff Weiner is one of Maverick’s investors.

The company has raised $2.7 million in seed funding led by Matt Robinson of Heroic Ventures, with participation from Susan Lyne and Nisha Dua of BBG Ventures as well as Jeff Weiner.

Here’s what co-founder and Chief Content Officer Catherine Connors had to say:

The research on girls’ social development has shown us the same thing for decades. During early adolescence, the majority of girls stop raising their hands, participating in sports and extra-curricular activities, taking risks, and stepping into leadership roles. In short, they stop believing in themselves. And it’s not because we don’t tell them that they should believe in themselves — it’s that they don’t get enough real opportunity to prove to themselves that they can.

Founders Chaffin and Connors met during their tenure at the Walt Disney Company and kept coming back to the idea of empowering girls through a new social network, and so Maverick was born.

The network is designed with a progression loop not unlike that of a game, where Mavericks can progress toward becoming a Catalyst and inspiring other young women.

The app launches out of beta today.



from Social – TechCrunch https://ift.tt/2r4OdEi

How to handle the flood of GDPR privacy updates

How best to make sense of revamped privacy terms issued ahead of the EU's data protection shake-up.

from BBC News - Technology https://ift.tt/2HvornN

IS web media targeted in EU-led attack

The EU police agency says it has "punched a big hole" in Islamic State propaganda.

from BBC News - Technology https://ift.tt/2r4wuN1

Wildlife photo competition disqualifies 'stuffed anteater' image

Wildlife Photographer of the Year excludes a winning image for featuring a taxidermy specimen.

from BBC News - Technology https://ift.tt/2r32xhd

Golden State Killer suspect traced using genealogy websites

Investigators hunting the Golden State Killer say they matched DNA to data from ancestry websites.

from BBC News - Technology https://ift.tt/2Fl5jmi

Thursday, April 26, 2018

What we learned from Facebook’s latest data misuse grilling

Facebook’s CTO Mike Schroepfer has just undergone almost five hours of often forensic and frequently awkward questions from members of a UK parliament committee that’s investigating online disinformation, whose members have been further fired up by what they claim was misinformation Facebook gave the committee.

The veteran senior exec, who’s clocked up a decade at the company, also as its VP of engineering, is the latest stand-in for CEO Mark Zuckerberg who keeps eschewing repeat requests to appear.

The DCMS committee’s enquiry began last year as a probe into ‘fake news’ but has snowballed in scope as the scale of concern around political disinformation has also mounted — including, most recently, fresh information being exposed by journalists about the scale of the misuse of Facebook data for political targeting purposes.

During today’s session committee chair Damian Collins again made a direct appeal for Zuckerberg to testify, pausing the flow of questions momentarily to cite news reports suggesting the Facebook founder has agreed to fly to Brussels to testify before European Union lawmakers in relation to the Cambridge Analytica Facebook data misuse scandal.

“We’ll certainly be renewing our request for him to give evidence,” said Collins. “We still do need the opportunity to put some of these questions to him.”

Committee members displayed visible outrage during the session, accusing Facebook of concealing the truth or at very least concealing evidence from it at a prior hearing that took place in Washington in February — when the company sent its UK head of policy, Simon Milner, and its head of global policy management, Monika Bickert, to field questions.

During questioning Milner and Bickert failed to inform the committee about a legal agreement Facebook had made with Cambridge Analytica in December 2015 — after the company had learned (via an earlier Guardian article) that Facebook user data had been passed to the company by the developer of an app running on its platform.

Milner also told the committee that Cambridge Analytica could not have any Facebook data — yet last month the company admitted data on up to 87 million of its users had indeed been passed to the firm.

Schroepfer said he wasn’t sure whether Milner had been “specifically informed” about the agreement Facebook already had with Cambridge Analytica — adding: “I’m guessing he didn’t know”. He also claimed he had only himself become aware of it “within the last month”.

“Who knows? Who knows about what the position was with Cambridge Analytica in February of this year? Who was in charge of this?” pressed one committee member.

“I don’t know all of the names of the people who knew that specific information at the time,” responded Schroepfer.

“We are a parliamentary committee. We went to Washington for evidence and we raised the issue of Cambridge Analytica. And Facebook concealed evidence to us as an organization on that day. Isn’t that the truth?” rejoined the committee member, pushing past Schroepfer’s claim to be “doing my best” to provide it with information.

A pattern of evasive behavior

“You are doing your best but the buck doesn’t stop with you does it? Where does the buck stop?”

“It stops with Mark,” replied Schroepfer — leading to a quick fire exchange where he was pressed about (and avoided answering) what Zuckerberg knew and why the Facebook founder wouldn’t come and answer the committee’s questions himself.

“What we want is the truth. We didn’t get the truth in February… Mr Schroepfer I remain to be convinced that your company has integrity,” was the pointed conclusion after a lengthy exchange on this.

“What’s been frustrating for us in this enquiry is a pattern of behavior from the company — an unwillingness to engage, and a desire to hold onto information and not disclose it,” said Collins, returning to the theme at another stage of the hearing — and also accusing Facebook of not providing it with “straight answers” in Washington.

“We wouldn’t be having this discussion now if this information hadn’t been brought into the light by investigative journalists,” he continued. “And Facebook even tried to stop that happening as well [referring to a threat by the company to sue the Guardian ahead of publication of its Cambridge Analytica exposé]… It’s a pattern of behavior, of seeking to pretend this isn’t happening.”

The committee expressed further dissatisfaction with Facebook immediately following the session, emphasizing that Schroepfer had “failed to answer fully on nearly 40 separate points”.

“Mr Schroepfer, Mark Zuckerberg’s right hand man whom we were assured could represent his views, today failed to answer many specific and detailed questions about Facebook’s business practices,” said Collins in a statement after the hearing.

“We will be asking him to respond in writing to the committee on these points; however, we are mindful that it took a global reputational crisis and three months for the company to follow up on questions we put to them in Washington D.C. on February 8.

“We believe that, given the large number of outstanding questions for Facebook to answer, Mark Zuckerberg should still appear in front of the Committee… and will request that he appears in front of the DCMS Committee before May 24.”

We reached out to Facebook for comment — but at the time of writing the company had not responded.

Palantir’s data use under review

Schroepfer was questioned on a wide range of topics during today’s session. And while he was fuzzy on many details, giving lots of partial answers and promises to “follow up”, one thing he did confirm was that Facebook board member Peter Thiel’s secretive big data analytics firm, Palantir, is one of the companies Facebook is investigating as part of a historical audit of app developers’ use of its platform.

Have there ever been concerns raised about Palantir’s activity, and about whether it has gained improper access to Facebook user data, asked Collins.

“I think we are looking at lots of different things now. Many people have raised that concern — and since it’s in the public discourse it’s obviously something we’re looking into,” said Schroepfer.

“But it’s part of the review work that Facebook’s doing?” pressed Collins.

“Correct,” he responded.

The historical app audit was announced in the wake of last month’s revelations about how much Facebook data Cambridge Analytica was given by app developer (and Cambridge University academic), Dr Aleksandr Kogan — in what the company couched as a “breach of trust”.

However Kogan, who testified to the committee earlier this week, argues he was just using Facebook’s platform as it was architected and intended to be used — going so far as to claim its developer terms are “not legally valid”. (“For you to break a policy it has to exist. And really be their policy. The reality is Facebook’s policy is unlikely to be their policy,” was Kogan’s construction, earning him a quip from a committee member that he “should be a professor of semantics”.)

Schroepfer said he disagreed with Kogan’s assessment that Facebook didn’t have a policy, saying the goal of the platform has been to foster social experiences — and that “those same tools, because they’re easy and great for the consumer, can go wrong”. So he did at least indirectly confirm Kogan’s general point that Facebook’s developer and user terms are at loggerheads.

“This is why we have gone through several iterations of the platform — where we have effectively locked down parts of the platform,” continued Schroepfer. “Which increases friction and makes it less easy for the consumer to use these things but does safeguard that data more. And been a lot more proactive in the review and enforcement of these things. So this wasn’t a lack of care… but I’ll tell you that our primary product is designed to help people share safely with a limited audience.

“If you want to say it to the world you can publish it on a blog or on Twitter. If you want to share it with your friends only, that’s the primary thing Facebook does. We violate that trust — and that data goes somewhere else — we’re sort of violating the core principles of our product. And that’s a big problem. And this is why I wanted to come to you personally today to talk about this because this is a serious issue.”

“You’re not just a neutral platform — you are players”

The same committee member, Paul Farrelly, who earlier pressed Kogan about why he hadn’t bothered to find out which political candidates stood to be the beneficiary of his data harvesting and processing activities for Cambridge Analytica, put it to Schroepfer that Facebook’s own actions in how it manages its business activities — and specifically because it embeds its own staff with political campaigns to help them use its tools — amounts to the company being “Dr Kogan writ large”.

“You’re not just a neutral platform — you are players,” said Farrelly.

“The clear thing is we don’t have an opinion on the outcome of these elections. That is not what we are trying to do. We are trying to offer services to any customer of ours who would like to know how to use our products better,” Schroepfer responded. “We have never turned away a political party because we didn’t want to help them win an election.

“We believe in strong open political discourse and what we’re trying to do is make sure that people can get their messages across.”

However in another exchange the Facebook exec appeared not to be aware of a basic tenet of UK election law — which prohibits campaign spending by foreign entities.

“How many UK Facebook users and Instagram users were contacted in the UK referendum by foreign, non-UK entities?” asked committee member Julie Elliott.

“We would have to understand and do the analysis of who — of all the ads run in that campaign — where is the location, the source of all of the different advertisers,” said Schroepfer, tailing off with a “so…” and without providing a figure. 

“But do you have that information?” pressed Elliott.

“I don’t have it on the top of my head. I can see if we can get you some more of it,” he responded.

“Our elections are very heavily regulated, and income or monies from other countries can’t be spent in our elections in any way shape or form,” she continued. “So I would have thought that you would have that information. Because your company will be aware of what our electoral law is.”

“Again I don’t have that information on me,” Schroepfer said — repeating the line that Facebook would “follow up with the relevant information”.

The Facebook CTO was also asked if the company could provide it with an archive of adverts that were run on its platform around the time of the Brexit referendum by Aggregate IQ — a Canadian data company that’s been linked to Cambridge Analytica/SCL, and which received £3.5M from leave campaign groups in the run-up to the 2016 referendum (and has also been described by leave campaigners as instrumental to securing their win). It’s also under joint investigation by Canadian data watchdogs, along with Facebook.

In written evidence provided to the committee today Facebook says it has been helping ongoing investigations into “the Cambridge Analytica issue” that are being undertaken by the UK’s Electoral Commission and its data protection watchdog, the ICO. Here it writes that its records show AIQ spent “approximately $2M USD on ads from pages that appear to be associated with the 2016 Referendum”.

Schroepfer’s responses on several requests by the committee for historical samples of the referendum ads AIQ had run amounted to ‘we’ll see what we can do’ — with the exec cautioning that he wasn’t entirely sure how much data might have been retained.

“I think specifically in Aggregate IQ and Cambridge Analytica related to the UK referendum I believe we are producing more extensive information for both the Electoral Commission and the Information Commissioner,” he said at one point, adding it would also provide the committee with the same information if it’s legally able to. “I think we are trying to do — give them all the data we have on the ads and what they spent and what they’re like.”

Collins asked what would happen if an organization or an individual had used a Facebook ad account to target dark ads during the referendum and then taken down the page as soon as the campaign was over. “How would you be able to identify that activity had ever taken place?” he asked.

“I do believe, uh, we have — I would have to confirm, but there is a possibility that we have a separate system — a log of the ads that were run,” said Schroepfer, displaying some of the fuzziness that irritated the committee. “I know we would have the page itself if the page was still active. If they’d run prior campaigns and deleted the page we may retain some information about those ads — I don’t know the specifics, for example how detailed that information is, and how long retention is for that particular set of data.”

Dark ads a “major threat to democracy”

Collins pointed out that a big part of UK (and indeed US) election law relates to “declaration of spent”, before making the conjoined point that if someone is “hiding that spend” — i.e. by placing dark ads that only the recipient sees, and which can be taken offline immediately after the campaign — it smells like a major risk to the democratic process.

“If no one’s got the ability to audit that, that is a major threat to democracy,” warned Collins. “And would be a license for a major breach of election law.”

“Okay,” responded Schroepfer as if the risk had never crossed his mind before. “We can come back on the details on that.”

On the wider app audit that Facebook has committed to carrying out in the wake of the scandal, Schroepfer was also asked how it can audit apps or entities that are no longer on the platform — and he admitted this is “a challenge” and said Facebook won’t have “perfect information or detail”.

“This is going to be a challenge again because we’re dealing with historic events so we’re not going to have perfect information or detail on any of these things,” he said. “I think where we start is — it very well may be that this company is defunct but we can look at how they used the platform. Maybe there’s two people who used the app and they asked for relatively innocuous data — so the chance that that is a big issue is a lot lower than an app that was widely in circulation. So I think we can at least look at that sort of information. And try to chase down the trail.

“If we have concerns about it even if the company is defunct it’s possible we can find former employees of the company who might have more information about it. This starts with trying to identify where the issues might be and then run the trail down as much as we can. As you highlight, though, there are going to be limits to what we can find. But our goal is to understand this as best as we can.”

The committee also wanted to know if Facebook had set a deadline for completing the audit — but Schroepfer would only say it’s going “as fast as we can”.

He claimed Facebook is sharing “a tremendous amount of information” with the UK’s data protection watchdog — as it continues its (now) year-long investigation into the use of digital data for political purposes.

“I would guess we’re sharing information on this too,” he said in reference to app audit data. “I know that I personally shared a bunch of details on a variety of things we’re doing. And same with the Electoral Commission [which is investigating whether use of digital data and social media platforms broke campaign spending rules].”

In Schroepfer’s written evidence to the committee Facebook says it has unearthed some suggestive links between Cambridge Analytica/SCL and Aggregate IQ: “In the course of our ongoing review, we also found certain billing and administration connections between SCL/Cambridge Analytica and AIQ”, it notes.

Both entities continue to deny any link exists between them, claiming they are entirely separate entities — though the former Cambridge Analytica employee turned whistleblower, Chris Wylie, has described AIQ as essentially the Canadian arm of SCL.

“The collaboration we saw was some billing and administrative contacts between the two of them, so you’d see similar people show up in each of the accounts,” said Schroepfer, when asked for more detail about what it had found, before declining to say anything else in a public setting on account of ongoing investigations — despite the committee pointing out other witnesses it has heard from have not held back on that front.

Another piece of information Facebook has included in the written evidence is the claim that it does not believe AIQ used Facebook data obtained via Kogan’s apps for targeting referendum ads — saying it used email address uploads for “many” of its ad campaigns during the referendum.

“The data gathered through the TIYDL [Kogan’s thisisyourdigitallife] app did not include the email addresses of app installers or their friends. This means that AIQ could not have obtained these email addresses from the data TIYDL gathered from Facebook,” Facebook asserts.

Schroepfer was questioned on this during the session and said that while there was some overlap in terms of individuals who had downloaded Kogan’s app and who had been in the audiences targeted by AIQ this was only 3-4% — which he claimed was statistically insignificant, based on comparing with other Facebook apps of similar popularity to Kogan’s.

“AIQ must have obtained these email addresses for British voters targeted in these campaigns from a different source,” is the company’s conclusion.

“We are investigating Mr Chancellor’s role right now”

The committee also asked several questions about Joseph Chancellor, the co-director of Kogan’s app company, GSR, who became an employee of Facebook in 2015 after he had left GSR. Its questions included what Chancellor’s exact role at Facebook is and why Kogan has been heavily criticized by the company yet his GSR co-director apparently remains gainfully employed by it.

Schroepfer initially claimed Facebook hadn’t known Chancellor was a director of GSR prior to employing him, in November 2015 — saying it had only become aware of that specific piece of his employment history in 2017.

But after a break in the hearing he ‘clarified’ this answer — adding: “In the recruiting process, people hiring him probably saw a CV and may have known he was part of GSR. Had someone known that — had we connected all the dots to when this thing happened with Mr Kogan, later on had he been mentioned in the documents that we signed with the Kogan party — no. Is it possible that someone knew about this and the right other people in the organization didn’t know about it, that is possible.”

A committee member then pressed him further. “We have evidence that shows that Facebook knew in November 2016 that Joseph Chancellor had formed the company, GSR, with Aleksandr Kogan which obviously then went on to provide the information to Cambridge Analytica. I’m very unclear as to why Facebook have taken such a very direct and critical line… with Kogan but have completely ignored Joseph Chancellor.”

At that point Schroepfer revealed Facebook is currently investigating Chancellor as a result of the data scandal.

“I understand your concern. We are investigating Mr Chancellor’s role right now,” he said. “There’s an employment investigation going on right now.”

In terms of the work Chancellor is doing for Facebook, Schroepfer said he thought he had worked on VR for the company — but emphasized he has not been involved with “the platform”.

The issue of the NDA Kogan claimed Facebook had made him sign also came up. But Schroepfer countered that this was not an NDA, just a “standard confidentiality clause” in the agreement certifying that Kogan had deleted the Facebook data and its derivatives.

“We want him to be able to be open. We’re waiving any confidentiality there if that’s not clear from a legal standpoint,” he said later, clarifying that Facebook does not consider Kogan legally gagged.

Schroepfer also confirmed this agreement was signed with Kogan in June 2016, and said the “core commitments” were to confirm the deletion of data from himself and three others Kogan had passed it to: Former Cambridge Analytica CEO Alexander Nix; Wylie, for a company he had set up after leaving Cambridge Analytica; and Dr Michael Inzlicht from the Toronto Laboratory for Social Neuroscience (Kogan mentioned to the committee earlier this week he had also passed some of the Facebook data to a fellow academic in Canada).

Asked whether any payments had been made between Facebook and Kogan as part of the contract, Schroepfer said: “I believe there was no payment involved in this at all.”

‘Radical’ transparency, not regulation

Other issues raised by the committee included why Facebook does not provide an overall control or opt-out for political advertising; why it does not offer a separate feed for ads but chooses to embed them into the Newsfeed; how and why it gathers data on non-users; the addictiveness engineered into its product; what it does about fake accounts; why it hasn’t recruited more humans to help with the “challenges” of managing content on a platform that’s scaled so large; and aspects of its approach to GDPR compliance.

On the latter, Schroepfer was queried specifically on why Facebook had decided to shift the data controller of ~1.5BN non-EU international users from Ireland to the US. On this he claimed the GDPR’s stipulation that there be a “lead regulator” conflicts with Facebook’s desire to be more responsive to local concerns in its non-EU international markets.

“US law does not have a notion of a lead regulator so the US does not become the lead regulator — it opens up the opportunity for us to have local markets have them, regions, be the lead and final regulator for the users in that area,” he claimed.

Asked whether he thinks the time has come for “robust regulation and empowerment of consumers over their information”, Schroepfer demurred at the suggestion that new regulation is needed to control data flowing over consumer platforms. “Whether, through regulation or not, making sure consumers have visibility, control and can access and take their information with you, I agree 100%,” he said — agreeing only to further self-regulation, not to the need for new laws.

“In terms of regulation there are multiple laws and regulatory bodies that we are under the guise of right now. Obviously the GDPR is coming into effect just next month. We have been regulated in Europe by the Irish DPC who’s done extensive audits of our systems over multiple years. In the US we’re regulated by the FTC, Privacy Commissioner in Canada and others. So I think the question isn’t ‘if’, the question is honestly how do we ensure the regulations and the practices achieve the goals you want. Which is consumers have safety, they have transparency, they understand how this stuff works, and they have control.

“And the details of implementing that is where all the really hard work is.”

His stock response to the committee’s concerns about divisive political ads was that Facebook believes “radical transparency” is the fix — also dropping one tidbit of extra news on that front in his written testimony by saying Facebook will roll out an authentication process for political advertisers in the UK in time for the local elections in May 2019.

Ads will also be required to be labeled as “political” and disclose who paid for the ad. And there will be a searchable archive — available for seven years — which will include the ads themselves plus some associated data (such as how many times an ad may have been seen, how much money was spent, and the kinds of people who saw it).

Collins asked Schroepfer whether Facebook’s ad transparency measures will also include “targeting data” — i.e. “will I understand not just who the advertiser was and what other adverts they’d run but why they’d chosen to advertise to me”?

“I believe among the things you’ll see is spend (how much was spent on this ad); you will see who they were trying to advertise to (what is the audience they were trying to reach); and I believe you will also be able to see some basic information on how much it was viewed,” Schroepfer replied — avoiding yet another straight answer.

from Social – TechCrunch https://ift.tt/2HZmDA9