Onkel Neal
11-05-17, 09:08 AM
Over the week I listened to several news stories and panel discussions about the ongoing problem of professional trolls using Twitter and Facebook to shape US opinion. If you haven't caught this, the big social media companies were on Capitol Hill (https://www.theatlantic.com/technology/archive/2017/11/a-list-of-what-we-really-learned-during-techs-congressional-hearings/544730/) explaining how Russian troll farms were able to saturate FB and Twitter with fake accounts, pushing fake news and manufactured outrage in order to inflame and direct US citizens. Basically, fake users were creating content that real people mistook for genuine, and it was causing real problems.
Senator Warner opened his committee remarks describing a bizarre moment when two Russian troll groups created competing events on May 21, 2016, at an Islamic center. The Heart of Texas page created a Facebook event “to stop the Islamization of Texas,” while the United Muslims of America created an event at the Islamic center. People who’d seen the event listing showed up on both sides, and it was not a friendly encounter.
This kind of "sowing discord" into the affairs of our citizens needs to be taken seriously. It's one thing for two neighbors to dislike and distrust each other, but if "friends" are whispering that the other guy is threatening to take action, conflict can occur where it otherwise would not.
'Kill them all' -- Russian-linked Facebook accounts called for violence (http://money.cnn.com/2017/10/31/media/russia-facebook-violence/index.html)
Facebook accounts run by Russian trolls repeatedly called for violence against different social and political groups in the U.S., including police officers, Black Lives Matter activists and undocumented immigrants.
Posts from three now-removed Facebook groups created by the Russian Internet Research Agency suggest Russia sought not only to meddle in U.S. politics but to encourage ideologically opposed groups to act out violently against one another. The posts are part of a database compiled by Jonathan Albright, the research director at Columbia University's Tow Center for Digital Journalism, who tracks and analyzes Russian propaganda.
For example, "Being Patriotic," a group that regularly posted content praising Donald Trump's candidacy, stated in an April 2016 post that Black Lives Matter activists who disrespected the American flag should "be immediately shot." The account accrued about 200,000 followers before it was shut down.
Another Russia-linked group, "Blacktivist," described police brutality in a November 2016 post weeks after the election, and stated, "Black people have to do something. An eye for an eye. The law enforcement officers keep harassing and killing us without consequences."
No doubt, a lot of people around the world see these social media "fake groups" and fake users and judge the rest of us Americans poorly.
And then a disgruntled Twitter employee turned off Trump's account, which, even if you think that's great, is a worry. Twitter has a lot of power, and apparently no safeguards that can prevent malfeasance. Imagine if the guy had made a few posts like "We are going to launch an attack on North Korea now! (and it will be beautiful)" .... so, yeah, social media, meet upcoming government regulations.
Naturally there is a lot of discussion by lawmakers about making big changes to the way the internet works and building more transparency and accountability into the mechanics of the web. I'm all for this. Anonymity is one thing, but having no reliable structure to manage bad actors is a potential disaster. If jihadists are posting dangerous content, an internet "police"/watchdog force should be able to pin down the source of this content and block it from appearing. If there is a fake account on FB, it should be traceable to a real geographic location and a real hardware ID. Masking proxies and that kind of thing need to be engineered out through technology. Maybe this is already possible, I don't know. China seems to have a handle on it. I know for myself, it would be nice if Google and Yahoo found a way to stop handing out millions of email accounts for click farms to spam Subsim. And it would be nice if I as an admin had a way to shut the door on all traffic from certain regions, the ones where all the spam originates.
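For what it's worth, region blocking of the kind an admin might want can be approximated at the application level by checking client IPs against a list of network ranges. A minimal sketch in Python using the standard library's ipaddress module (the CIDR ranges below are placeholder documentation ranges, not real spam sources; a real setup would pull ranges from a geo-IP database):

```python
import ipaddress

# Hypothetical blocklist: these are reserved documentation ranges
# (TEST-NET-2 and TEST-NET-3), used here purely as placeholders.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client IP falls inside any blocked range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.42"))  # True  (inside a blocked range)
print(is_blocked("8.8.8.8"))       # False (not in any blocked range)
```

Of course, as the post says, determined spammers hide behind proxies and VPNs, so this only raises the bar rather than closing the door.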
It will be interesting to see where this all goes. In ten years will we look back on this period as the "old wild west" days of the internet, before it was "fixed"?