Tuesday, March 29, 2011

The question of the role that social media companies play as protesters use them to communicate about their protests and their governments exploded onto the world stage during the protests sweeping the Middle East in early 2011. Lest it be presumed that the firms’ respective policies matter only in terms of what content (or users) is allowed and how that content could affect events on the ground, the policies themselves bear on the claim made by the West that we are the land of the free. In other words, if social media companies are (allowed to be) oppressive or otherwise disrespectful of their customers, the overall message to the oppressed in the Middle East cannot be that greater freedom is indeed possible because it exists here. Lest our own private sector unwittingly undercut the words and efforts of the protesters, we might want to use this case to ask whether we couldn’t be freer too.
According to Ebele Okobi-Harris, the director of the business and human rights program at Yahoo, which owns Flickr, the case of el-Hamalawy, an Egyptian activist whose uploaded pictures of security agents were abruptly taken down by Flickr staff, has prompted internal discussions about whether Flickr should reconsider its approach. What if the photos had been his own and he had not yet backed them up? Flickr’s abrupt and unannounced action suddenly seems quite oppressive. Fortunately, managers at Flickr are at least thinking about the issue. “As the uses of these social networks evolve,” Okobi-Harris said, “we have to start thinking about how to create rules, or how to apply rules, that also facilitate human rights activists using these tools.”
According to The New York Times, Okobi-Harris “pointed to the challenges of balancing the existing rules and terms of service for users with the new ways that activists are using these tools. One challenge is whether a company should maintain its commitment to remain neutral about content, even when politicized content could offend users or even put people in danger. ‘Does a company take responsibility for the content?’” Okobi-Harris asked. For instance, what, el-Hamalawy asks, would Flickr do if a group that opposes abortion wanted to post photographs of doctors who perform abortions? In his own case, el-Hamalawy “said Flickr’s decision to take down the photos left him not only frustrated and angry but also terrified. ‘Everyone knew that I had released those photos,’ he said. ‘Then the photos were gone. I couldn’t sleep. I was thinking that at any minute, they were going to come for me.’” Would Flickr managers be responsible for el-Hamalawy’s death if it were occasioned by Flickr’s action? Or was it his own act of uploading the photos in the first place that put his life at risk? To be sure, Flickr should have notified him before taking down his pictures; the company is responsible for causing him fear. However, this seems more like bad business than unethical conduct on Flickr’s part. Whether a customer is a protester oppressed by a dictatorship or simply a novice photographer who has uploaded her own pictures, there does appear to be reason to withhold one’s trust from Flickr’s staff.
Beyond the matter of bad customer relations, which seems ubiquitous in American business, the question must be addressed of whether social media companies, including Facebook, Flickr, Twitter and YouTube among others, are unwittingly biased toward oppressive governments, even if only from a desire to maintain control over their sites. In early 2011, it became clear that such companies were increasingly being used by activists and pro-democracy forces, especially in the Middle East and North Africa, and that their governments had taken notice. As Okobi-Harris asked of Flickr, does a social media company have responsibility for the content? Furthermore, should such a company be susceptible to the influence of governments, whether in identifying users or barring their content?
According to The New York Times, the “new role for social media has put these companies in a difficult position: how to accommodate the growing use for political purposes while appearing neutral and maintaining the practices and policies that made these services popular in the first place.” The New York Times reports that “YouTube was one of the first social media networks to wrestle with content posted by a human rights advocate that conflicted with its terms of service. In November 2007, YouTube removed videos flagged as ‘inappropriate’ by a community member that showed a person in Egypt being tortured by the police. They were uploaded by Wael Abbas, another Egyptian blogger involved in opposing torture in Egypt. After a public outcry, YouTube staff members reviewed the videos and restored them.” Had YouTube managers been influenced by Egypt in taking down the videos, the company would have effectively taken sides in the dispute between the Egyptian government and its people. Absent such pressure, the issue could simply be whether a warning notice is appropriate given the graphic nature of the violence being shown. I made the horrible mistake, for example, of watching the slow beheading of a Western hostage by a terrorist group in the Middle East. Even a year or two later, I can still hear the man’s raspy voice shouting for dear life as his throat was being deprived of air while his murderers pathologically invoked their deity’s name, as if their act had been sanctioned by a power higher than, and thus somehow justifying, their own anger and resentment. Because I ignored the “graphic content” warning (such warnings are perhaps too commonly used), the issue facing YouTube may indeed go beyond whether such a warning should apply. In my case, my curiosity got the better of me. Should YouTube have been responsible for protecting me from myself? This seems like a tall order, especially as it could invite the staff to discriminate between content based on their own political or ideological positions. Perhaps the staff could limit their intervention to extremely graphic content, with review taking place within the company in particular cases. Still, in a free society, citizens ultimately must take responsibility for ignoring warnings; I don’t believe that a government or private company can protect citizens from themselves, even if it can protect us from others who would seek to harm, mislead, or cheat us.
Regarding Facebook, The New York Times reported on March 26, 2011, that the company “has remained mostly quiet about its increasing role among activists in the Middle East who use the site to connect dissident groups, spread information about government activities and mobilize protests. But Facebook is now finding itself drawn into the Israeli-Palestinian conflict and has been pushed to defend its neutral approach and terms of service to some supporters of Israel, including an Israeli government official. Yuli Edelstein, an Israeli minister of diplomacy and diaspora affairs, sent a letter [in March] to Facebook’s chief executive, Mark Zuckerberg, asking him to remove a Facebook page created on March 6 named the Third Palestinian Intifada. The page, which calls for an uprising in the occupied Palestinian territory in May, has more than 240,000 members. ‘As Facebook’s C.E.O. and founder, you are obviously aware of the site’s great potential to rally the masses around good causes, and we are all thankful for that,’ Mr. Edelstein wrote. ‘However, such potential comes hand in hand with the ability to cause great harm, such as in the case of the wild incitement displayed on the above-mentioned page.’ Facebook has, so far, not removed the page. The administrators are not advocating violence, and therefore, it falls within the company’s definition of acceptable speech, company officials said. ‘We want Facebook to be a place where people can openly discuss issues and express their views, while respecting the rights and feelings of others,’ said Andrew Noyes, a spokesman for public policy at the company.”
“Wild incitement” could just as well describe the pro-democracy rallies that had been taking place throughout the Middle East. Even if violence were being called for in the Intifada, would Facebook (or Twitter) remove such content if it had been put up by an Egyptian or Libyan protester? More pointedly, what if a page or tweet urged “wild incitement” while its author was being attacked by government troops or police? How far removed is an occupied people from such intimidation on a daily basis? Should they be barred from tweeting, “Come help me at X intersection b/c police are beating my elderly parents”? The staff at Facebook are smart not to intervene in disputes between a government and its people. If anything, an American company has a basis for taking the side of the oppressed, for the United States came into existence out of a struggle against British oppression. Relatedly, the U.S. Government acts in keeping with its own origins whenever it takes the side of a people protesting against governmental oppression.
Even if Facebook does not intervene to censor content, it is possible, even likely, that particular policies work inherently to the advantage of vengeful government agencies and pose a threat to Facebook’s customers. For example, The New York Times reports that “Human rights advocates have also criticized Facebook for not being more flexible with some of its policies, specifically its rule requiring users to create accounts with their real names. Danny O’Brien, the Internet advocacy coordinator for the Committee to Protect Journalists, cited the case of Michael Anti, an independent journalist and blogger from China whose Facebook account was deactivated in January because he had not used his state-given name to create it. In addition to losing the ability to publish and communicate on Facebook, and not wanting to use his real name because of China’s strict rules governing freedom of speech and harsh response to those activists who violate them, he has lost the contact information for thousands of people in his Facebook community. ‘One can’t expect all of these services to provide everything to everyone,’ said Mr. O’Brien. ‘I think that part of the solution is to provide people with a dignified way of leaving the service.’”
O’Brien was conceding too much to Facebook. It is insufficient to expect Facebook merely to provide its customers with a dignified way to leave (or be deprived of service). In addition, Facebook ought to respect the preference of some of its customers for anonymity. Facebook’s staff would still have those customers’ contact information (more of which could be demanded and verified in such cases), so anonymity would not be an excuse for getting away with unethical or illegal conduct, such as publicly defaming someone by making false claims. At the very least, being refused anonymity gives a person an ethical basis for lying about being on Facebook. Moreover, Facebook’s insistence that real names be used adds to the argument that the U.S. Constitution should be amended to include an explicit right to privacy. Much of the criticism of Roe v. Wade is actually that the justices “found” such a right to be implicit in that constitution. While the problem of Facebook insisting that customers use their real names may have implications for U.S. constitutional law, we may simply have a matter here of bad business (i.e., bad customer service). In an economy kept competitive by anti-trust law, the emergence of a competitor with more respect for its customers could be anticipated. Ironically, a government strong enough to resist the lure of industry lobbyists and enforce anti-trust law with sharp teeth can actually spread liberty by allowing for competition. Such liberty could be expected to leave its own imprint on the world in the midst of the pro-democracy protests in the Middle East. In fact, Facebook showing respect for its potential and actual customers who prefer anonymity could send a message stronger than any from the protesters or human rights advocates in the Middle East: “Look over here! Real freedom is possible!”

Source:
Jennifer Preston, “Ethical Quandary for Social Sites,” The New York Times, March 26, 2011. http://www.nytimes.com/2011/03/28/business/media/28social.html?_r=2&hp
