Wednesday, December 5, 2007

Are Facebook and Google Big Brother?

I read an interesting post on Google’s official blog concerning “free expression and controversial content on the web,” which got me thinking about the role and responsibility of companies when it comes to exposing data versus respecting the individual’s right to privacy. Are the companies that collect, disseminate and filter our digital content and personal information playing the role of Big Brother?


On the one hand, providers of digital content are under legal obligation to governments such as China and Germany to remove content that violates local law. On the other hand, to what extent should companies such as Google and Facebook respect individual rights and freedoms when it comes to the collection and display of our data?


Censorship of content on the web has always been a sensitive issue. One of the fundamental tenets of the web is the right to publish freely. Many content providers, such as Digg, MySpace and YouTube, rely upon collective intelligence to surface or bury content. So should user-generated content be more closely monitored?


Recently, a video appeared on YouTube of the Finnish boy responsible for a school shooting, showing off his guns and threatening to commit a massacre. After the shooting occurred, YouTube responded swiftly by removing the offending material and offering its condolences to all the families affected by the tragedy. CNET reporter Greg Sandoval argues that “blaming YouTube in such a situation would be equivalent to holding the U.S. Postal Service responsible for delivering the messages sent by the Zodiac Killer.”

Google’s Rachel Whetstone makes a valid point when she says that offering a largely uncensored search index is akin to phone companies and ISPs allowing unscreened phone conversations and emails to be transmitted freely. The main difference, however, is that content on the web can reach a potentially worldwide audience. The tension between censorship and free expression is a perennial issue, but what are data collectors doing to protect our individual right to privacy, and what happens if the information we share falls into the wrong hands? Can Google’s powerful search index get users in trouble for saying the wrong thing?

I’m sure it can.

I recently encountered a situation in which I had mentioned a client’s name in the context of an office move, which resulted in the client requesting that I not blog about them. How did they find my innocuous post? Through Google Alerts, which delivers content matching a keyword search directly to a user’s inbox or iGoogle page. To their credit, Blogger (owned by Google) includes a feature that allows users to request that their blog content not be indexed by search engines.
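
Opt-outs like Blogger’s typically rely on the long-standing robots exclusion convention: a site publishes a robots.txt file (or a noindex directive) and well-behaved crawlers consult it before indexing a page. The snippet below is only an illustrative sketch using Python’s standard library, not Blogger’s or Google’s actual implementation, and the blog URL is invented.

# Illustrative sketch only: roughly how a well-behaved crawler honours a
# robots exclusion request. This is not Blogger's or Google's actual code.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt asking every crawler to skip the whole blog.
robots_txt = """User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

post_url = "http://example-blog.blogspot.com/2007/12/office-move.html"
if parser.can_fetch("*", post_url):
    print("Crawler may index:", post_url)
else:
    print("Crawler should skip:", post_url)  # expected outcome with the rules above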

Disclosure of personal information continues to be a hot topic and surfaced again recently with the new Facebook Beacon alerts. On Nov. 6th, Facebook introduced a feature that publicly displays actions users perform on external sites (such as Blockbuster movie rentals, Epicurious recipe clippings and Travelocity bookings) as mini-feed stories on a user’s Facebook page. The move sparked a wave of protests from Facebook users.

According to a NYT article, more than 50,000 Facebook members signed a petition objecting to having such messages displayed to their friends. The article cites the example of a user who saw a news feed informing her that her sister had purchased the Harry Potter game “Scene It”. This particular user is quoted as saying, “I don’t want to know what people are getting me for Christmas”. The signers of the MoveOn.org petition are pushing to be able to opt out of the Beacon program with one click. The users’ main complaints were that the feature was impossible to disable and that the beacon notification, which pops up on the third-party site to inform users that their information is about to be shared with Facebook, was too small and displayed for too short a time. In response to the protests, Facebook made the beacon box appear larger on the screen, display for longer, and changed the rules to require users to click an affirmative ‘OK’ button for each beacon displayed. If users ignore the alert boxes, Facebook will not post a news feed story, but at the time of writing the company has not added a universal opt-out choice.
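
To make the revised flow concrete, here is a minimal sketch of the consent rule described above. It is purely conceptual (Facebook’s actual Beacon implementation is not public), and every name in it is hypothetical.

# Conceptual sketch of the revised Beacon consent rule described above.
# Purely hypothetical; not Facebook's actual code or API.
from enum import Enum

class BeaconResponse(Enum):
    CLICKED_OK = "ok"      # user explicitly approved sharing this action
    IGNORED = "ignored"    # user dismissed or never acknowledged the alert

def should_publish_story(response: BeaconResponse) -> bool:
    """Under the revised rules, a story is shared only on an affirmative OK."""
    return response is BeaconResponse.CLICKED_OK

# Example: a Blockbuster rental event.
print(should_publish_story(BeaconResponse.CLICKED_OK))  # True: story appears in the mini feed
print(should_publish_story(BeaconResponse.IGNORED))     # False: nothing is posted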

[Image: early Beacon warnings]

[Image: later, modified Beacon warnings]

(source: NYT, “The Evolution of Facebook’s Beacon”, November 29, 2007)

Removing content across and within domains is a sensitive and topical issue as more of our world becomes digital, searchable and archived. Should we be more cautious about what we publish, or continue to share freely? What rights should users have to request removal from search indices such as Google’s? Companies and organizations such as women’s shelters already exercise the right to remain under the radar. What about individual rights? Who owns our data anyway? Could photographs of you during college prevent you from getting your dream job upon graduation? And what about the digital-detective and reputation-lookup sites springing up, such as Rapleaf, which display facts, figures and information about you that you didn’t even know existed or had forgotten about? And what if they choose to expose this information (as Facebook did)? When the semantic web really takes off and more powerful ways of tagging, relating and distributing content exist, such issues will become even more pressing and will no doubt require more powerful technical solutions.

Perhaps Google (or any company that holds ‘our’ data) needs to include a “Remove this from your search engine” button in its search results experience.
