What if the internet in Brazil could be controlled?

Cintia Alves
Cintia Alves holds a degree in journalism (2012) and a postgraduate degree in Digital Media Management (2018). She is certified in executive training for journalists (2023) by the Craig Newmark Graduate School of Journalism at CUNY (The City University of New York). She is an editor and has worked at Jornal GGN since 2014.
[email protected]

https://www.youtube.com/watch?v=18ao1F7lz8A&feature=youtu.be

Jornal GGN – The first episode of “XPLOIT: Internet Sob Ataque” (“XPLOIT: Internet Under Attack”), an original production by TV Drone/Actantes, shows how the Legislature has made room for bills that circumvent or undermine the civilizing principles laid down by the Marco Civil da Internet. Invasion of privacy, prior censorship, and mass surveillance are among the topics on the agenda. The series, produced in partnership with the Heinrich Böll Stiftung and with the collaboration of Rede TVT, will examine attacks on civil liberties on and through the internet, in Brazil and around the world.

2 Comments


  1. The final stage of the

    The final stage of the establishment of a totalitarian regime in this century will be total control of the internet. We will get a glimpse of that process in Brazil over the next four years.

  2. How political campaigns armed themselves with social bots
    An interesting article on how votes are won on the internet using algorithmic bots, in English: 18 Oct 2018 | 15:00 GMT

    How Political Campaigns Weaponize Social Media Bots

    Analysis of computational propaganda in the 2016 U.S. presidential election reveals the reach of bots

    By Philip N. Howard

    Opening illustration: Jude Buffum

    In the summer of 2017, a group of young political activists in the United Kingdom figured out how to use the popular dating app Tinder to attract new supporters. They understood how Tinder’s social networking platform worked, how its users tended to use the app, and how its algorithms distributed content, and so they built a bot to automate flirty exchanges with real people. Over time, those flirty conversations would turn to politics—and to the strengths of the U.K.’s Labour Party.

    To send its messages, the bot would take over a Tinder profile owned by a Labour-friendly user who’d agreed to the temporary repurposing of his or her account. Eventually, the bot sent somewhere between 30,000 and 40,000 messages, targeting 18- to 25-year-olds in constituencies where the Labour candidates were running in tight races. It’s impossible to know precisely how many votes are won through social media campaigns, but in several targeted districts, the Labour Party did prevail by just a few votes. In celebrating their victory, campaigners took to Twitter to thank their team—with a special nod to the Tinder election bot.

    How a Political Social Media Bot Works

    Illustration: Jude Buffum

    1. The bot automatically sets up an account on a social media platform.
    2. The bot’s account may appear to be that of an actual person, with personal details and even family photos.
    3. The bot crawls through content on the site, scanning for posts and comments of interest.
    4. The bot posts its own content to engage other human users.
    5. Networks of bots act in concert to promote a candidate or message, to muddy political debate, or to disrupt support for an opponent.
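
    As a rough sketch of the workflow above, the short Python listing below strings the five steps together. It is purely illustrative: the SocialClient class, its methods, the keyword list, and the canned replies are hypothetical stand-ins, not any real platform's API or any campaign's actual code.

        import random
        import time

        class SocialClient:
            """Hypothetical wrapper around a social platform's API (a stand-in, not a real library)."""

            def __init__(self, credentials):
                self.credentials = credentials

            def create_account(self, profile):
                """Steps 1-2: register an account dressed up as a plausible person."""
                ...

            def search_posts(self, keywords):
                """Step 3: return recent posts matching any of the keywords."""
                return []

            def reply(self, post, text):
                """Step 4: post a reply under another user's post."""
                ...

        CAMPAIGN_KEYWORDS = ["#Election", "#LivingWage"]  # hashtags the bot watches (illustrative)
        CANNED_REPLIES = [
            "Interesting point - have you seen what the other candidate actually proposes?",
            "This is exactly why I'm voting the other way this time.",
        ]

        def run_bot(client):
            client.create_account({"name": "Alex P.", "photo": "family_picnic.jpg"})
            while True:
                for post in client.search_posts(CAMPAIGN_KEYWORDS):
                    client.reply(post, random.choice(CANNED_REPLIES))
                # Step 5 (acting in concert) would sit one level up: a scheduler
                # driving many such bots with coordinated timing and messaging.
                time.sleep(60)  # pause to mimic human pacing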

    By now, it’s no surprise that social media is one of the most widely used applications online. Close to 70 percent of U.S. adults are on Facebook, with three-quarters of that group using it at least once a day. To be sure, most of the time people aren’t using Facebook, Instagram, and other apps for politics but for self-expression, sharing content, and finding articles and video.

    But with social media so deeply embedded in people’s lives and so unregulated, trusted, and targetable, these platforms weren’t going to be ignored by political operators for long. And there is mounting evidence that social media is being used to manipulate and deceive voters and thus to degrade public life.

    To be sure, the technology doesn’t always have this effect. It’s difficult to tell the story of the Arab Spring [PDF] without acknowledging how social media platforms allowed democracy advocates to coordinate themselves in surprising new ways, and to send their inspiring calls for political change cascading across North Africa and the Middle East.

    But the highly automated nature of news feeds also makes it easy for political actors to manipulate those social networks. Studies done by my group at the Oxford Internet Institute’s Computational Propaganda Research Project have found, for example, that about half of Twitter conversations originating in Russia [PDF] involve highly automated accounts. Such accounts push out vast amounts of political content, and many are so well programmed that the targets never realize that they’re chatting with a piece of software.

    We’ve also discovered that professional trolls and bots have been aggressively used in Brazil [PDF] during two presidential campaigns, one presidential impeachment campaign, and the mayoral race in Rio. We’ve seen that political leaders in many young democracies are actively using automation to spread misinformation and junk news.

    And in the United States, we have found evidence that active-duty military personnel have been targeted [PDF] with misinformation on national security issues and that the dissemination of junk news was concentrated in swing states [PDF] during the U.S. presidential election in 2016.

    The earliest reports of organized social media manipulation emerged in 2010. So in less than a decade, social media has become an ever-evolving tool for social control, exploited by crafty political operatives and unapologetic autocrats. Can democracy survive such sophisticated propaganda?

    The 2016 U.S. presidential election was a watershed moment in the evolution of computational techniques for spreading political propaganda via social networks. Initially, the operators of the platforms failed to appreciate what they were up against. When Facebook was first asked how the Russian government may have contributed to the Trump campaign, the company dismissed such foreign interference as negligible. Some months later, Facebook recharacterized the influence as minimal, with only 3,000 ads costing US $100,000 linked to some 470 accounts.

    Tactics of Political Social Media Bots

    Zombie Electioneering: Gives the appearance of broad support for an issue or candidate through automated commenting, scripted dialogues, and other means (Illustration: Jude Buffum)

    Finally, in late October 2017, nearly a year after the election, Facebook revealed that Russia’s propaganda machine had actually reached 126 million Facebook users with its ad campaign. What’s more, the Internet Research Agency, a shadowy Russian company linked to the Kremlin, posted roughly 80,000 pieces of divisive content on Facebook, which reached about 29 million U.S. users between January 2015 and August 2017.

    Facebook was not the only social media platform affected. Foreign agents published more than 131,000 tweets from 2,700 Twitter accounts [PDF] and uploaded over 1,100 videos [PDF] to Google’s YouTube.

    What propagandists love about social media is a network structure that’s ripe for abuse. Each platform’s distributed system of users operates largely without editors. There is nobody to control the production and circulation of content, to maintain quality, or to check the facts.

    The propagandists can fool a few key people, and then stand back and let them do most of the work. The Facebook posts from the Internet Research Agency, for instance, were liked, shared, and followed by authentic users, which allowed the posts to organically spread to tens of millions of others.

    Facebook eventually shut down the accounts where the Internet Research Agency posts originated, along with more than 170 suspicious accounts on its photo-sharing app, Instagram. Each of these accounts was designed to look like that of a real social media user, a real neighbor, or a real voter, and engineered to distribute disinformation and divisive messages to unsuspecting users’ news feeds. The Facebook algorithm aids this process by identifying popular posts—those that have been widely liked, shared, and followed—and helping them to go viral by placing them in the news feeds of more people.

    AstroTurf Campaign: Makes an electoral or legislative campaign appear to be a grassroots effort (Illustration: Jude Buffum)

    As research by our group and others has revealed, computational propaganda takes many forms: networks of highly automated Twitter accounts; fake users on Facebook, YouTube, and Instagram; chatbots on Tinder, Snapchat, and Reddit. Often the people running these campaigns find ways to game the algorithms that the social media platforms use to distribute news.

    Doing so usually means breaking terms-of-service agreements, violating community norms, and otherwise using the platforms in ways that their designers didn’t intend. It may also mean running afoul of election guidelines, privacy regulations, or consumer protection rules. But it happens anyway.

     

    Another common tactic is to simply pay for advertising and take advantage of the extensive marketing services that social media companies offer their advertisers. These services let buyers precisely target their audience according to thousands of different parameters—not just basic information, such as location, age, and gender, but also more nuanced attributes, including political beliefs, relationship status, finances, purchasing history, and the like. Facebook recently removed more than 5,000 of these categories to discourage discriminatory job ads—which gives you an idea of how many categories there are in total.

    Election Botnets: During the November 2016 U.S. election, the largest Trump Twitter botnet [right] consisted of 944 bots, compared with 264 bots in the largest pro-Clinton botnet [left]. What’s more, the Trump botnet was more centralized and interconnected, suggesting a higher degree of strategic organization. (Images: Oxford Internet Institute)

    One of the chief ways to track political social media manipulation is to look at the hashtags that both human users and bots use to tag their messages and posts. The main hashtags will reference candidates’ names, party affiliations, and the big campaign issues and themes—#TrumpPence, #LivingWage, #Hillary2016, and so on. An obvious shortcoming of this approach is that we don’t know in advance which hashtags will prove most popular, and so we may miss political conversations that either use hashtags that emerged later in the campaign or carry no hashtag at all.

    Nonetheless, we can use the hashtags that we do know to identify networks of highly automated accounts. Twitter data is for the most part public, so we can periodically access it directly through the company’s application programming interface (API), the channel through which Twitter makes its data available to developers and other customers. For a 10-day period starting on 1 November 2016, we collected about 17 million tweets from 1,798,127 users. We also sampled Twitter data during each of the three presidential debates.
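
    A minimal sketch of this kind of hashtag-based collection is shown below. It assumes the tweets have already been pulled out of the platform (for example, paged from Twitter's search API) as an iterable of dicts; the field names and the tracked hashtags are illustrative choices, not the project's actual pipeline.

        from collections import Counter

        # Hashtags chosen in advance (an illustrative list, not the study's full set).
        TRACKED_HASHTAGS = {"#trumppence", "#livingwage", "#hillary2016"}

        def collect_by_hashtag(tweets):
            """Bucket tweets by tracked hashtag and count matching tweets per author.

            `tweets` is an iterable of dicts with 'author_id' and 'text' fields.
            """
            buckets = {tag: [] for tag in TRACKED_HASHTAGS}
            tweets_per_author = Counter()
            for tweet in tweets:
                words = {w.lower().strip(".,!?") for w in tweet["text"].split()}
                hits = TRACKED_HASHTAGS & words
                if hits:
                    tweets_per_author[tweet["author_id"]] += 1
                    for tag in hits:
                        buckets[tag].append(tweet)
            return buckets, tweets_per_author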

    Hashtag Hijacking: Appropriates an opponent’s hashtag to distribute spam or otherwise undermine support (Illustration: Jude Buffum)

    Sifting through the data, we saw patterns in who was liking and retweeting posts, which candidates were getting the most social media traffic, how much of that traffic came from highly automated accounts, and what sources of political news and information were being used. We constructed a retweeting network that included only connections where a human user retweeted a bot. This network consisted of 15,904 humans and 695 bots. The average human user in this network shared information from a bot five times.
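
    The human-retweets-bot network described above boils down to a filtered edge list. The sketch below shows one way to build it, assuming the retweets are available as (retweeter, original author) pairs and that the bot accounts have already been identified; both assumptions are made for illustration only.

        from collections import Counter

        def build_human_bot_network(retweets, bot_ids):
            """Keep only the edges in which a human retweeted a known bot.

            `retweets` is an iterable of (retweeter_id, original_author_id) pairs;
            `bot_ids` is the set of account IDs already classified as bots.
            """
            edges = [(src, dst) for src, dst in retweets
                     if src not in bot_ids and dst in bot_ids]
            humans = {src for src, _ in edges}
            bots = {dst for _, dst in edges}
            shares_per_human = Counter(src for src, _ in edges)
            avg_shares = sum(shares_per_human.values()) / len(humans) if humans else 0.0
            return edges, humans, bots, avg_shares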

    We then focused on accounts that were behaving badly. Bots aren’t sinister in themselves, of course. They’re just bits of software used to automate and scale up repetitive processes, such as following, linking, replying, and tagging on social media. But they can nevertheless affect public discourse by pushing content from extremist, conspiratorial, or sensationalist sources, or by pumping out thousands of pro-candidate or anti-opponent tweets a day. These automated actions can give the false impression of a groundswell of support, muddy public debate, or overwhelm the opponent’s own messages.

    We have found that accounts tweeting more than 50 times a day using a political hashtag are almost invariably bots or accounts that mix automated techniques with occasional human curation. Very few humans—even journalists and politicians—can consistently generate dozens of fresh political tweets each day for days on end.
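
    That rule of thumb translates into very little code. The sketch below assumes we already have, for each account, a list of daily counts of tweets that used a tracked political hashtag; the threshold and the averaging choice are simplifications of the heuristic described above.

        def flag_likely_bots(daily_counts, threshold=50):
            """Flag accounts averaging more than `threshold` politically hashtagged tweets per day.

            `daily_counts` maps account_id -> list of per-day tweet counts.
            """
            flagged = set()
            for account, counts in daily_counts.items():
                if counts and sum(counts) / len(counts) > threshold:
                    flagged.add(account)
            return flagged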

    Once we’ve identified individual bots, we can map the bot networks—bots that follow each other and act in concert, often exactly reproducing content coming from one another. In our modeling of Twitter interactions, the individual accounts represented the network’s nodes, and retweets represented the network’s connections.
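
    Mapping the botnets then amounts to building a graph whose nodes are the flagged accounts and whose edges are retweets between them. The sketch below uses the networkx library for the graph bookkeeping; the input format is again an assumption made for illustration.

        import networkx as nx

        def map_botnets(retweets, bot_ids):
            """Build the bot-to-bot retweet graph and list its connected components.

            `retweets` is an iterable of (retweeter_id, original_author_id) pairs;
            `bot_ids` is the set of accounts flagged as highly automated.
            """
            graph = nx.Graph()
            graph.add_nodes_from(bot_ids)
            for src, dst in retweets:
                if src in bot_ids and dst in bot_ids and src != dst:
                    graph.add_edge(src, dst)
            # Each connected component is a candidate botnet; the largest ones are
            # the structures shown in the "Election Botnets" illustration.
            components = sorted(nx.connected_components(graph), key=len, reverse=True)
            return graph, components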

    Gratitude: Campaign workers tweeted about their successful chatbot, which rallied support for the U.K. Labour Party through automated conversations on the dating platform Tinder. (Twitter screenshot)

    What did we learn about the 2016 U.S. election? Both of the major presidential candidates attracted networks of automated Twitter accounts that pushed around their content. Our team mapped these botnet structures over time by tracking the retweeting of the most prominent hashtags—Clinton-related and Trump-related as well as politically neutral.

    The Trump and Clinton bot networks looked and behaved very differently, as can be seen in the illustration “Election Botnets,” which depicts the largest botnet associated with each campaign. The much larger Trump botnet consisted of 944 bots and was highly centralized and interconnected, suggesting a greater degree of strategic organization and coordination. The Clinton botnet had just 264 bots and was more randomly arranged and diffuse, suggesting more organic growth.

    The pro-Trump Twitter botnets were also far more prolific during the three presidential debates. After each debate, highly automated accounts supporting both Clinton and Trump tweeted about their candidate’s victory. But on average, pro-Trump automated accounts released seven tweets for every tweet from a pro-Clinton automated account. The pro-Trump botnets grew more active in the hours leading up to the final debate, some of them declaring Trump the winner even before the debate had started.

    Retweet Storm: Simultaneous reposts or retweets of a post by hundreds or thousands of other bots (Illustration: Jude Buffum)

    Another successful strategy for the Trump botnets was strategically colonizing pro-Clinton hashtags by using them in anti-Clinton messages. For the most part, each candidate’s human and bot followers used particular hashtags associated with their candidate. But Trump followers tended to also mix in Clinton hashtags. By Election Day, about a quarter of the pro-Trump Twitter traffic was being generated by highly automated accounts, and about a fifth of those tweets contained both Clinton and Trump hashtags. This resulted in negative messages generated by Trump’s supporters (using such hashtags as #benghazi, #CrookedHillary, #lockherup) being injected into the stream of positive messages being traded by Clinton supporters (tagged with #Clinton, #hillarysupporter, and the like).
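
    Measuring that kind of hashtag colonization is a straightforward counting exercise once each camp's hashtags are listed. In the sketch below, the two hashtag sets are examples drawn from the article and the tweet format is assumed; the function returns the share of Trump-camp tweets that also carry a Clinton-camp hashtag.

        TRUMP_CAMP = {"#trumppence", "#crookedhillary", "#benghazi", "#lockherup"}
        CLINTON_CAMP = {"#clinton", "#hillarysupporter", "#hillary2016"}

        def colonization_rate(tweets):
            """Fraction of Trump-camp tweets that also use a Clinton-camp hashtag.

            `tweets` is an iterable of dicts with a 'text' field.
            """
            trump_tweets = 0
            mixed = 0
            for tweet in tweets:
                words = {w.lower().strip(".,!?") for w in tweet["text"].split()}
                if words & TRUMP_CAMP:
                    trump_tweets += 1
                    if words & CLINTON_CAMP:
                        mixed += 1
            return mixed / trump_tweets if trump_tweets else 0.0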

    Finally, we noticed that most of the bots went into hibernation immediately following the election. In general, social media bots tend to have a clear rhythm of content production. Bots that work in concert with humans will be active in the day and dormant at night. More automated bots will be front-loaded with content and then push out messages around the clock. A day after the election, these same bots, which had been pumping out hundreds of posts a day, fell silent. Whoever was behind them had switched them off. Their job was done.

     

    In the run-up to the U.S. midterm election, the big question is not whether social media will be exploited to manipulate voters, but rather what new tricks and tactics and what new actors will emerge. In August, Facebook announced that it had already shut down Iranian and Russian botnets trying to undermine the U.S. elections. As such activity tends to spike in the month or so right before an election, we can be certain that won’t be the end of it.

    Meanwhile, Twitter, Facebook, and other social media platforms have implemented a number of new practices to try to curtail political manipulation on their platforms. Facebook, for example, disabled over 1 billion fake accounts, and its safety and security team has doubled to more than 20,000 people handling content in 50 languages. Twitter reports that it blocks half a million suspicious log-ins per day. Social media companies are also investing in machine learning and artificial intelligence that can automatically spot and remove “fake news” and other undesirable activity.

    Strategic Flagging: Tools intended to flag inappropriate content are instead used to flag an opponent’s legitimate content, which may then be erroneously deleted by a social media platform (Illustration: Jude Buffum)

    But the problem is now a global one. In 2017, our researchers inventoried international trends in computational propaganda [PDF], and we were surprised to find organized ventures in each of the 28 countries we looked at. Every authoritarian regime in the sample targeted its own citizens with social media campaigns, but only a few targeted other countries. By contrast, almost every democratic country in the sample conducted such campaigns to try to influence other countries.

    In a follow-up survey of 48 countries, we again saw political social media manipulation in every country in our sample. We are also seeing tactics spreading from one campaign cycle or political consultant or regime to another.

    Voters have always relied on many sources of political information; family, friends, news organizations, and charismatic politicians obviously predate the Internet. The difference now is that social media platforms provide the structure for political conversation. And when these technologies permit too much fake news and divisive messages and encourage our herding instinct, they undermine democratic processes without regard for the public good.

    We haven’t yet seen true artificial intelligence applied to the production of political messages. The prospect of armies of AI bots that more closely mimic human users, and therefore resist detection, is both worrisome and probably inevitable.

    Protecting democracy from social media manipulation will require some sort of public policy oversight. Social media companies cannot be expected to regulate themselves. They are in the business of selling information about their users to advertisers and others, information they gather through the conversations that take place on their platforms. Filtering and policing that content will cause their traffic to shrink, their expenses to rise, and their revenues to fall.

    To defend our democratic institutions, we need to continue to independently evaluate social media practices as they evolve, and then implement policies that protect legitimate discourse. Above all, we need to stay vigilant, because the real threats to democracy still lie ahead.

    This article appears in the November 2018 print issue as “The Rise of Computational Propaganda.”

    To Probe Further

    For further details on social media manipulation in political campaigns, see

    “Computational Propaganda in Russia: The Origins of Digital Misinformation,” by Sergey Sanovich (June 2017)

    “Junk News on Military Affairs and National Security: Social Media Disinformation Campaigns Against U.S. Military Personnel and Veterans,” by John D. Gallacher, Vlad Barash, Philip N. Howard, and John Kelly (October 2017)

    “Polarization, Partisanship and Junk News Consumption Over Social Media in the U.S.,” by Vidya Narayanan, Vlad Barash, John Kelly, Bence Kollanyi, Lisa-Maria Neudert, and Philip N. Howard (February 2018)

    “Computational Propaganda in the United States of America: Manufacturing Consensus Online” [PDF], by Samuel C. Woolley and Douglas Guilbeault (2017)

    “Algorithms, Bots, and Political Communication in the U.S. 2016 Election: The Challenge of Automated Political Communication for Election Law and Administration,” by Philip N. Howard, Samuel Woolley, and Ryan Calo, Journal of Information Technology & Politics (April 2018)

    “Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation,” by Samantha Bradshaw and Philip N. Howard (2018)

    About the Author

    Philip N. Howard is the director of the Oxford Internet Institute at the University of Oxford and principal investigator of the Computational Propaganda Project.

     
