Is Programmatic to Blame for Trump & Brexit?


The likes of Google and Facebook have admitted that fake news stories shared and made accessible via their platforms influenced the result of the American Presidential race, at least to some extent. Much has been said about what these two companies intend to do to prevent, label or remove such content, but far less about people's motives for creating and sharing it in the first place.

Now, I'm sure some content was produced and shared by supporters on one side or the other deliberately to deceive (you only have to look at the tweets that went around trying to convince people they could vote via Twitter) and to swing voters' opinions directly. However, many of the mainstream media stories fail to mention, or give little time to, the other motive: the money that can be made from advertising placed on these articles.

The fact that the big media, sorry tech, companies are vowing to help fix this is a little odd given their previous responses to the, technically at least, very similar issue of online music and film piracy. There, the companies said that actively policing all the links and content on their sites was impossible or impractical. Legal mechanisms like the DMCA Safe Harbour rules (and similar local versions) were created to reach a compromise over who is responsible for managing this: very basically, if notified of infringing content, the site has to remove it quickly, and liability can be argued about later (#notalawyer). The sites that actually host the illegal content are, once again, funded through advertising money.

What's more confusing is that blocking "news" is far more likely to be seen as censorship or an infringement of free-speech rights than removing a link to Coldplay's new album.

One of the key ways internet piracy is tackled now is by cutting off the ad money going to the people behind it. Content Verification (CV) is utilised at various points in the ad-serving chain to prevent ads from appearing and, ultimately, to stop the website owners getting paid. Many CV providers have a range of content classifications covering things like "adult content", "drugs", "file sharing", and so on. They utilise blacklists and whitelists alongside technology that scans the URL and the page content to label pages and websites. Many also have a classification for "news", sometimes broken down by type, such as "Transportation Accidents". Brands obviously don't want their ads to appear right next to a news article about their latest misdemeanour…
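Purely as an illustration, the blacklist/whitelist-plus-scanning approach described above might be sketched like this. The domains, category names, keyword lists and the classify() helper are all hypothetical; real CV providers use far larger lists and much more sophisticated models than a simple keyword match.

```python
# Toy sketch of a CV-style page classifier: list lookups first,
# then a crude keyword scan of the page content. Everything here
# (domains, categories, keywords) is invented for illustration.

BLACKLIST = {"dodgy-downloads.example"}   # never serve ads here
WHITELIST = {"trusted-news.example"}      # always considered safe

CATEGORY_KEYWORDS = {
    "file sharing": {"torrent", "seeders", "crack"},
    "news": {"breaking", "reporter", "election"},
}

def classify(domain: str, page_text: str) -> str:
    """Return an ad-safety label for a page on the given domain."""
    if domain in BLACKLIST:
        return "blocked"
    if domain in WHITELIST:
        return "safe"
    # Crude content scan: does the page share words with a category list?
    words = set(page_text.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            return category   # first matching category wins
    return "unclassified"
```

An advertiser would then decide, per campaign, which labels are acceptable placements and which should never be bought.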

I've yet to see a CV provider talk about trying to classify "fake news", though. The algorithms required to work out whether a news story is entirely factual, hyped-up opinion or utterly fabricated would be very difficult to write indeed. I suspect social networks may be able to use the comments posted alongside a story to detect controversy more easily, which could lead to better categorisation.
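To make the comment-based idea concrete, here is a minimal sketch: if the sentiment of the comments under a story is strongly polarised, the story is flagged as controversial and a candidate for closer review. The sentiment scores are assumed inputs from some upstream model, and the threshold is arbitrary; none of this reflects how any real network actually does it.

```python
# Sketch of comment-based controversy detection: a story whose comments
# split into strongly positive and strongly negative camps has
# high-variance sentiment. Scores are assumed to come from some
# sentiment model, normalised to [-1.0, 1.0].

def controversy_score(sentiments: list[float]) -> float:
    """Variance of the comment sentiments; higher = more divisive."""
    n = len(sentiments)
    if n < 2:
        return 0.0
    mean = sum(sentiments) / n
    return sum((s - mean) ** 2 for s in sentiments) / n

def looks_controversial(sentiments: list[float], threshold: float = 0.5) -> bool:
    """Flag a story for review when its comments are heavily polarised."""
    return controversy_score(sentiments) > threshold
```

A uniformly negative comment thread (say, about a natural disaster) would score low here, which is the point: controversy is disagreement, not just negativity.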

Maybe we will have to go back to what happened in the early days of content verification, which was to have the newest team member sift through page after page of questionable content, deciding quite how bad it is. I know people who are still mentally scarred from that exercise!

I can also imagine some of the cooler brands embracing advertising on such content, with copy like "Not everything online is high quality, but our sale items sure are!" (#notacopywritereither).
