Gonzalo E. Mon is a partner in the Advertising Law practice at Kelley Drye & Warren LLP and his co-author, John J. Heitmann, is a partner in the firm’s Telecommunications group. Read more on Kelley Drye’s advertising blog, Ad Law Access, or keep up with the group on Facebook or Twitter.

If you work with mobile apps, you may already know that privacy is a hot issue. Regulators are pushing companies to improve their privacy practices, Congress is contemplating new laws, and class action lawyers are suing companies that don’t clearly disclose their practices. In the past few weeks, this focus on privacy intensified as the FTC, the California Attorney General, and even the White House weighed in with new announcements.

Two things are clear from this recent burst of activity. First, regulators are putting pressure on everyone in the mobile app ecosystem to improve their practices, so you can't just assume that compliance is your partner's responsibility. Second, with the number of regulators focusing on these issues, it's going to be a lot harder for companies to hide. No matter what role you play in the mobile app ecosystem, you should pay attention to these developments. Here's what you need to know.


Increased Focus on App Privacy


In February, the FTC issued a report about mobile apps directed to children. Although these apps can collect a broad range of information, the FTC noted that neither the app stores nor app developers provide enough information for parents to determine what data is collected from their children or how it is used or shared. In some cases, this could be a violation of federal law. The FTC wants all members of the kids app ecosystem to play an active role in making appropriate disclosures to parents.

Shortly after the FTC issued its report, the California Attorney General announced an agreement with the leading app stores in which the stores agreed to add a field in the app submission process for developers to post their privacy notices or a link to a privacy policy. The agreement is intended to ensure that consumers have an opportunity to access pertinent privacy information before they download an app. Moreover, the app stores have committed to provide a mechanism for consumers to report apps that don’t comply with laws or the app store’s terms of service.

And the White House also stepped into the debate by announcing a data privacy framework that establishes a “Consumer Privacy Bill of Rights.” Although the framework speaks broadly about privacy issues, several sections discuss issues that are particularly relevant to the mobile space. For example, the White House encourages app developers to collect only as much personal data as they need and to tailor their privacy disclosures to mobile screens.


5 Tips to Stay Ahead of the Regulators


Given the quickly changing legal landscape — and the growing number of government institutions that want to play a role in that landscape — it can be difficult for companies in the mobile app space to understand what is required. The following five tips address concerns that all of these institutions appear to share. Accordingly, they should form the starting point for your legal analysis when you develop and launch an app.

1. Don’t collect more than you need.

Because data can function as the currency of the digital age, there is often a tendency to collect as much data as possible. Companies think that even if they don’t have an immediate use for the data now, they might find a use (or a buyer) for it later on. Although this may be true, resist the temptation to collect more data than you need for your app to work. This is a core principle of the FTC’s “privacy by design” framework, as well as the new White House framework.

2. Disclose your privacy practices.

You need to make sure that users can easily learn what information you are collecting from them and how you are using it before they download your app. (The changes the app stores are making as a result of their agreement with the California AG will make this easier.) Make sure that your privacy notices are easy to read and tailored to the mobile setting. If you're looking for a place to start, consider the Mobile Marketing Association's Privacy Policy Guidelines for Mobile Apps.

3. Be careful with children.

If you collect personal information from children under 13, you need to comply with the Children’s Online Privacy Protection Act. Among other things, COPPA generally requires companies to obtain verifiable consent from parents before they collect personal information from their children. The FTC has challenged app developers for violating COPPA, and the agency’s latest report suggests that the FTC expects all members of the kids app ecosystem to play a role in complying.

4. Consider when to get consent.

Although various bills pending in Congress would require companies to get consent before collecting certain types of information, outside of COPPA, getting consent is not a uniformly applicable legal requirement yet. Nevertheless, there are some types of information (such as location-based data) for which getting consent may be a good idea. Moreover, it may be advisable to get consent at the point of collection when sensitive personal data is in play. Work with your legal counsel to determine what makes sense in your context.
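As a concrete illustration, the "consent before collection" pattern can be reduced to a simple gate in code: record which data categories the user has explicitly opted into, and refuse to collect anything else. A minimal sketch (the `ConsentStore` class and category names are illustrative assumptions, not from any real SDK):

```python
class ConsentStore:
    """Tracks which data categories a user has explicitly opted into."""

    def __init__(self):
        self._granted = set()

    def grant(self, category):
        self._granted.add(category)

    def has_consent(self, category):
        return category in self._granted


def collect_location(store, get_fix):
    """Only read the device location if the user opted in; otherwise skip."""
    if not store.has_consent("location"):
        return None  # no silent collection
    return get_fix()


store = ConsentStore()
assert collect_location(store, lambda: (38.9, -77.0)) is None  # no consent yet
store.grant("location")
assert collect_location(store, lambda: (38.9, -77.0)) == (38.9, -77.0)
```

The point of the gate is that the default is "don't collect": nothing sensitive is read unless an affirmative opt-in is on record for that specific category.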

5. Protect the information you collect.

Unfortunately, it's not uncommon to read stories about major companies that experience data breaches. Data breaches can be costly to address, and they may result in lasting damage to your brand. If you are collecting information from consumers, you need to ensure you have physical, electronic, and procedural safeguards to protect that information. For example, certain data should be encrypted and access to it should be limited. Moreover, you should properly dispose of data when you no longer need it.
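One small example of such a safeguard: identifiers that are only needed for matching, not for display, can be stored as salted one-way hashes instead of plaintext, so a breach exposes digests rather than email addresses. A sketch using Python's standard library (the salt handling is deliberately simplified; a real system would manage salts and rotation more carefully):

```python
import hashlib
import hmac
import os


def hash_identifier(identifier: str, salt: bytes) -> str:
    """One-way hash of an identifier; a breach exposes digests, not emails."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()


salt = os.urandom(16)  # per-deployment secret salt
stored = hash_identifier("user@example.com", salt)

# Later: match an incoming identifier without ever storing it in the clear.
candidate = hash_identifier("user@example.com", salt)
assert hmac.compare_digest(stored, candidate)  # same email matches
assert stored != hash_identifier("other@example.com", salt)  # others don't
```

The same idea extends to disposal: when the data is no longer needed, deleting the salt renders every stored digest unmatchable.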

Your favorite mobile apps should soon make it a lot clearer when they intend to use your data.

The Attorney General of California, Kamala D. Harris, announced a deal Wednesday with Amazon, Apple, Google, Hewlett-Packard, Microsoft and Research in Motion; the companies agreed to strengthen privacy protection for users who download third-party apps to smartphones and tablet devices.

In the deal, the companies said they would require app developers to clearly spell out what data their apps can access and what the app or company does with that data. The deal also requires app store custodians such as Apple and Google, who run the App Store and Android Market, to set up a way for users to report apps that don't provide a clear-cut explanation of their privacy policies.

According to a statement from Attorney General Harris’ office, if an app developer doesn’t meet these new privacy-policy requirements, they could be charged with a crime under California law.

“California has a unique commitment to protecting the privacy of our residents,” said Harris. “Our constitution directly guarantees a right to privacy, and we will defend it.”

Android users are well aware that developers on the platform are required to ask them for permission before accessing their personal data, but they’re not told how or why their data is being accessed. Apple also doesn’t allow any software on its App Store that takes personal information without asking, but developers haven’t been transparent on that platform, either.

In fact, Harris’ office says, only five percent of all mobile apps offer a privacy policy. And developers across both platforms have come under fire recently for coding software that transmits users’ personal data unbeknownst to them.

That controversy managed to pique the interest of some members of Congress, who sent a letter of inquiry to Apple.

Should lawmakers intervene when the creators of popular platforms like Android and iOS may not be doing enough to protect the privacy of their users? Sound off in the comments below.

Path released a list of Valentine’s Day stats Wednesday, just one week after a privacy controversy prompted Path’s CEO to issue an apology.

In a press release, Path lists mentions of certain terms, such as "romance," "engaged," and "single," on Valentine's Day. The stats are somewhat interesting; however, the release seems ill-timed considering the hot water Path was in last week over privacy issues with its app.

Developer Arun Thampi discovered that users unknowingly submit their entire address book to Path's servers through the app, reporting the issue on his blog Feb. 8. Path CEO Dave Morin issued an apology on the company's blog that same day, admitting Path made a mistake. He added that the information was only used to improve the "Add Friends" feature and alert users when a contact joined Path. Morin wrote that users' address books were sent to servers "over an encrypted connection" and "stored securely on our servers using industry standard firewall technology."

The way Path sourced this Valentine’s data is different from last week’s issue. With aggregate usage data, it analyzes all Path users as a whole without looking at any one individual.

Numerous social network companies are privy to information about popular terms and what you're accessing on their sites without violating privacy policies. But this new batch of information could keep concerns about what information companies such as Path can access front and center in users' minds.

Last week's Path problem was a result of the app not asking users' permission before it accessed their information, which is a rule for all iOS apps. Permissions on Facebook's new apps drew similar concern from some experts and generated criticism, because the opt-in agreement is usually mandatory in order to use an app.

Here are the stats from the press release:

Path metrics for Valentine’s Day in the U.S. on February 14, 2012:

  • Comments with the word “love” on Path up 26%
  • Comments with the words “romantic” or “romance” up 153%
  • Comments using the words “marry,” “date” or “engage” up 54%
  • Negative comments about Valentine’s Day up 38%
  • Comments with the word “single,” up 53%
  • Percentages of moments tagged with other people up 33%
  • Love (hearts), 43.5%
  • Happy (smiley face), 25%
  • Laugh (laughing face), 18%
  • Surprise (surprised face), 8.6%
  • Sad (sad face), 4.9%

Top 10 songs people listened to on Valentine’s Day:

  • Someone like You by Adele
  • Young, Wild and Free (feat. Bruno Mars) by Wiz Khalifa
  • Set Fire to the Rain by Adele
  • We Found Love (feat. Calvin Harris) by Rihanna
  • A Thousand Years by Christina Perri
  • It Will Rain by Bruno Mars
  • The One That Got Away by Katy Perry
  • Paradise by Coldplay
  • I Will Always Love You by Whitney Houston
  • Take Care (feat. Rihanna) by Drake

Top 10 Artists listened to on Path:

  • Drake
  • Adele
  • Rihanna
  • Coldplay
  • Beyonce
  • Whitney Houston
  • Bruno Mars
  • Chris Brown
  • Katy Perry
  • Maroon 5

  • Average time users went to sleep, 12:11 a.m.

What information do you feel comfortable providing to social networking sites? Tell us in the comments.

John Fontana is the identity evangelist for Ping Identity and editor of the PingTalk Blog. Prior to joining Ping, he spent 11 years as a senior editor at Network World.

Google, Facebook, Yahoo and others all want to be your identity platform on the web. But while it’s certainly convenient to have one credential for multiple websites, many would argue these services are only secure enough to access your grandmother’s online recipe book.

A growing number of technologists, IT executives, organizations and governments believe an identity authentication model must be built on established standards.

But can any set of standards answer the tough security challenges, and to what degree? Is it safe to check your social security account on a credential issued by Google? To access health records using your Facebook ID?

Not today. And tomorrow is not likely either.

SEE ALSO: Who Owns Your Identity on the Social Web?

However, OpenID Connect and OAuth 2.0 (open authorization) point to some of the most promising standards available today. OAuth is the foundation for OpenID Connect (the basis for consumer ID) and for User-Managed Access (UMA), a model that lets users control their personal data. Companies such as Bechtel, Chevron, Cisco, GE, M&T Bank, Salesforce.com, and others are already enjoying early success. OpenID Connect and OAuth 2.0 offer a place where consumer and corporate IDs can co-mingle in a secure cloud, protected by acceptable levels of security.
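For a sense of what these standards look like in practice, the OAuth 2.0 authorization-code flow that OpenID Connect builds on begins with the client redirecting the user to the provider's authorization endpoint. A sketch of constructing that request (the endpoint, client ID and redirect URI below are placeholder values, not any real provider's):

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs


def build_authorization_url(endpoint, client_id, redirect_uri, scope):
    """Construct an OAuth 2.0 authorization-code request URL.

    The random `state` value is echoed back by the provider so the client
    can reject responses it never initiated (CSRF protection).
    """
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return f"{endpoint}?{urlencode(params)}", state


url, state = build_authorization_url(
    "https://id.example.com/authorize",   # placeholder provider endpoint
    "my-app", "https://app.example.com/cb", "openid profile")

query = parse_qs(urlparse(url).query)
assert query["response_type"] == ["code"]
assert query["state"] == [state]
```

After the user approves, the provider redirects back with a short-lived code that the client exchanges server-to-server for tokens; the user's password never passes through the relying site.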

While it’s too early to tell if OpenID and OAuth will succeed, so far, they appear able to validate a user’s identity — perhaps even identities created by search engines and social sites.


“Street Identity” and Identity Attribute Data


Furthermore, big names are supporting the standards push. Google, Verizon, data exchange service ID/Webdata, and trust framework provider Open Identity Exchange (OIX) proposed a service called Street Identity at a conference last week. Street Identity is designed to strengthen authentication on the web. Loosely coupled "providers" contribute pieces of user data called attributes, such as street address, age or mobile phone number, that can be used to more accurately validate a user's identity.

“Google’s [efforts] recognize what is happening now, which is identities are being deconstructed into attributes,” says Don Thibeau, chairman of OIX.

Ironically, Google and other companies with massive user data repositories don’t have enough validated pieces of user information to strengthen authentication. Google would need to partner with an attribute provider that would incorporate that information into the authentication process — with user consent, of course. The service would include a revenue model for businesses and organizations that agree to participate.

Google’s idea doesn’t replace the current identity standards effort. Rather, Street Identity is building on OpenID Connect and OAuth. It incorporates UMA for user control and features the first implementation of OpenID Connect’s spec for attribute aggregation and distribution, which was largely championed by Microsoft and its internal identity guru, Mike Jones.

Google and its partners believe that by aggregating a user's data from various trusted sources, Street Identity can solve three problems. First, the service would connect to real-world identities, which OpenID does not do. Second, it would provide a financial incentive for mobile operators that collect fees for providing data. Finally, it would allow the government to steer clear of the electronic ID business by accessing needed data via attribute providers.

The prospect sounds promising, but so did pure PKI before its implementers began telling war stories. It seems, however, that Google continues to work toward a user authentication standard. The caveat is that standardization still has a lot more work ahead.

Apparently, Facebook has a lot of work to do on its privacy controls. In some cases, the new “frictionless sharing” features of Facebook can make it so that even when you’re logged out of Facebook, your browser is still tracking every page you visit, sending that data back to Facebook.

According to entrepreneur and self-described hacker Nik Cubrilovic, who shows the code involved with this alleged security issue on his website, “Even if you are logged out, Facebook still knows and can track every page you visit. The only solution is to delete every Facebook cookie in your browser, or to use a separate browser for Facebook interactions.”

Oddly enough, Cubrilovic says this data is not even hidden, adding that “You can test this for yourself using any browser with developer tools installed. It is all hidden in plain sight.”
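The check Cubrilovic describes amounts to inspecting which cookies the browser still sends after logout. A rough sketch of the idea using Python's standard library, with a hypothetical captured Cookie header (the cookie names below are illustrative, not Facebook's actual names):

```python
from http.cookies import SimpleCookie

# A hypothetical Cookie header copied from a browser's developer tools
# after logging out of a site; the names here are made up for illustration.
header = "datr=abc123; lu=xyz789; locale=en_US"

jar = SimpleCookie()
jar.load(header)

# Any cookie that survives logout is attached to every request to that
# domain, including requests triggered by social plugins on other sites.
surviving = sorted(jar.keys())
assert surviving == ["datr", "locale", "lu"]
```

In a real browser the same inspection is a matter of opening the network panel, loading a page with a social plugin while logged out, and reading the Cookie header on the plugin request.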

SEE ALSO: Facebook Changes Again: Everything You Need To Know

Cubrilovic's interest was piqued after he read a post by Dave Winer on Scripting News, raising the specter of Facebook announcing the websites you're visiting and the articles you're reading without your explicit permission or knowledge. Such capabilities are written into Facebook's new API, according to Winer. He says that Facebook scares him, writing, "I think there's a good chance that by visiting a site you are now giving them access to lots more info about you. I could be mistaken about this."

Winer's post was a reaction to one written last week by ReadWriteWeb, pointing out that the new "social reader" apps Facebook plans to launch soon (already available if you enable Facebook Timeline) will be able to display what you're reading to your Facebook friends. However, we logged into one of those apps, The Guardian Social Reader, and found that it was easy to opt out of these "features" when we first began using it.

Even though you can opt out of much of this sneaky kind of sharing, we’re thinking Facebook still has some work to do before everyone can feel perfectly secure with its apps and sharing capabilities. Perhaps it’s a matter of educating users about Facebook’s new capabilities. Meanwhile, it might be time for us to modify that old saying, “Don’t write anything that you wouldn’t want to have read in court.” For the time being, must we change that to “Don’t click on any website that you wouldn’t want to have revealed in court?”

Update: Facebook engineer Arturo Bejar responded to the following question I emailed to Facebook Sunday afternoon: “Will users be able to completely prevent their browsing data from being sent back to Facebook, or from displaying on their feeds?”:

"I am a Facebook engineer that works on these systems and I wanted to say that the logged out cookies are used for safety and protection including: identifying spammers and phishers, detecting when somebody unauthorized is trying to access your account, helping you get back into your account if you get hacked, disabling registration for under-age users who try to re-register with a different birthdate, powering account security features such as 2nd factor login approvals and notification, and identifying shared computers to discourage the use of 'keep me logged in.'"

“Also please know that also when you’re logged in (or out) we don’t use our cookies to track you on social plugins to target ads or sell your information to third parties. I’ve heard from so many that what we do is to share or sell your data, and that is just not true. We use your logged in cookies to personalize (show you what your friends liked), to help maintain and improve what we do, or for safety and protection.”

You’re invited to respond to Arturo’s statement in the comments section below.


Photos: Facebook Timeline

[Photo gallery] Timeline is a radical departure from previous versions of the Facebook profile. The most prominent feature is a user-changeable cover photo at the top of the page, above a two-column interface showing top photos, status updates and friends. Facebook pulls known life events into the timeline, starting with your birth (you can add a picture and context to it), and a Facebook Places-powered map shows where you have visited. Other life events, such as getting married, breaking a bone or getting a new job, can be added through the Publisher Bar, and the Timeline's settings offer customization features.





Christian Olsen is the head of Levick Strategic Communications’ social and digital media practice. Follow him on Twitter @cfolsendc.

Recently, online properties like Hulu, MSN and Flixster have been caught using a tougher version of the common cookie. These “supercookies” (aka “Flash cookies” and “zombie cookies”) serve the same purpose as regular cookies by tracking user preferences and browsing histories. Unlike their popular cousins, however, this breed is difficult to detect and subsequently remove. These cookies secretly collect user data beyond the limitations of common industry practice, and thus raise serious privacy concerns.

Supercookies are similar to the standard browser cookies most folks are familiar with, but are stored in different locations on a user’s machine, for example, in a file used by a plug-in (Flash is the most common). This makes them harder to find and delete, especially since a browser’s built-in cookie detection process won’t remove them either. Furthermore, some supercookies have additional capabilities, like regenerating regular cookies to prevent their removal by the user.
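The respawning behavior can be illustrated with a toy model: mirror the tracking ID in a store that the browser's cookie controls don't reach, and restore the cookie from it on the next visit. A sketch (the storage model is deliberately simplified; real supercookies use Flash local shared objects and similar plug-in stores):

```python
class Browser:
    """Toy model: one store the user can clear (cookies) and one they
    typically can't see (standing in for plug-in local storage)."""

    def __init__(self):
        self.cookies = {}
        self.plugin_storage = {}  # stands in for Flash LSO storage

    def clear_cookies(self):
        self.cookies.clear()  # the browser's cookie UI only reaches this store


def track(browser, site):
    """Respawning logic: mirror the ID in both stores, and restore the
    cookie from plug-in storage whenever the user has deleted it."""
    uid = (browser.cookies.get(site)
           or browser.plugin_storage.get(site)
           or "uid-001")  # placeholder for a newly minted tracking ID
    browser.cookies[site] = uid
    browser.plugin_storage[site] = uid
    return uid


b = Browser()
first = track(b, "example.com")
b.clear_cookies()                          # user deletes cookies...
assert track(b, "example.com") == first    # ...but the same ID comes back
```

Deleting the visible cookie accomplishes nothing, because the mirror copy quietly re-creates it, which is exactly why these are also called "zombie cookies."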

To make matters worse, removing these supercookies is much more difficult. It requires the user to dig through the file system and delete them manually, an inconvenient task even for advanced users. The novice, on the other hand, likely won't even realize supercookies exist, let alone be able to find them.

SEE ALSO: 10 Travel Tips for Protecting Your Privacy

The kind of data supercookies track isn't typical cookie material. A browser limits a typical cookie so that it can be written, read and ultimately removed only by the site that created it. The supercookie, on the other hand, operates outside of these safeguards: it can track and record user behavior across multiple sites. While it's easy to understand that a site would want to track a user's activity while she navigates its turf, it's ethically questionable for site operators to record a user's actions beyond the site's own parameters.

In several cases, a company's supercookie is the result of its partnership with a digital marketing firm that places a high value on user behavior. In response to FTC pressure, the Internet ad and marketing industry published "self-regulatory" policies, although those policies restrict little beyond the use of a user's medical records.

To the majority of the public, this type of Internet tracking is outside the bounds of acceptable conduct. While the "right to track" may be written into a terms-of-use or user agreement, it is often not fully disclosed or within the realm of industry standards, weakening its legal defense. Furthermore, tracking represents a breach of trust between user and site, and consumers have historically exhibited little tolerance for brand betrayal.

While many companies that were challenged on their use of supercookies were quick to cease, some chose to continue the practice. Many web marketing firms, advertisers and less-than-scrupulous websites still refuse to follow industry best practices; they continue the practice knowingly. And many more sites don't even realize they're using supercookies in the first place.

Whether it has decided to cease web tracking or not, a company at risk needs to guard against losing control of data it has already collected. A data breach would result in catastrophic, and perhaps irreparable, brand distrust. A user's discovery of a company's surreptitious data collection, and the subsequent vulnerability of that data, could easily spell the end of a brand's reputation.

Companies that care about reputation and user trust should audit their sites and properties to ensure that data collection and the use of supercookies parallel user expectations. This analysis applies to the site, its advertisers and any third party tools or plug-ins. Companies need to ensure that all data collection has been thoroughly disclosed in order to avoid legal liability.

Companies should not wait for a problem to arise before initiating a comprehensive data security overview. A regular screening of all user data and its safeguards is good practice. The cost a company suffers for securing its data and customer trust is small compared to the business and public relations fallouts that would result from a security breach.

A successful company will always make a comprehensive attempt at transparency by handling data responsibly. The use of data tracking tools like supercookies does not rank highly in consumer acceptance, whether its application is technically “legal” or not. Regardless of the manner in which information is collected, know that negligent data handling will not be excused by claims that a company was in the dark about its collection practices. In the eyes of the consumer, the more data collected, the more of an obligation that company has to keep it safe.

A scientist at the network security company Arbor Networks has used data from 80 Internet service providers around the world to create an image of the Internet block in Egypt.

The graphic, which was compiled using anonymous traffic engineering statistics, shows traffic to and from Egypt dropping sharply around 5:20 p.m. ET. As of about three hours ago, traffic has not picked back up.

Craig Labovitz, the creator of the graphic and chief scientist at Arbor Networks, says that he found no evidence of Internet disruption in Syria, debunking a report from Al Arabiya earlier Friday that suggested all service in Syria had been cut off.

Just when we thought the holiday season was over, we came across another reason to celebrate: Data Privacy Day 2011 is today.

The international event is the fourth annual “celebration of the dignity of the individual expressed through personal information” put on by the non-profit organization The Privacy Projects. Perhaps in an attempt to remove itself from the list of privacy advocates’ favorite targets, Google is one of the organization’s sponsors.

With online tracking increasingly creeping out consumers and Facebook adding to an ever-growing history of privacy glitches, Internet users have indicated that increased privacy is indeed worth celebrating.

Respondents to a recent survey by Opera Software indicated that consumers in the U.S., Japan and Russia are more worried about Internet privacy than they are about terrorist attacks, being attacked in their homes or going bankrupt.

Yet despite these fears, many people still don't clear their browser data or use safe passwords, according to the same survey.

“It is interesting to note the gap between what people say concerns them online and what they do in practice to protect themselves,” says Christen Krogh, the chief development officer of Opera Software. “We often see that it is human nature to fear traffic accidents but not wear a seatbelt or helmet, or dread bankruptcy but continue spending, and it very much seems like it is the same for online safety behavior.”

One of the easiest ways to avoid signing up to share more information than you’re comfortable with is to read sites’ privacy policies. Just in case you haven’t exactly made this a habit (we, of course, scour every one), designer Calvin Pappas has waded through the top 1,000 sites’ policies in order to tell you what you’re missing.

Google and Mozilla have both announced new browser initiatives that will allow users to opt out of having their activities tracked by online advertisers. These developments are at least partially in response to the “Do Not Track” lists proposed by the U.S. Federal Trade Commission.

In December, the FTC released a 122-page report [PDF] outlining the concept, which has been called a "Do Not Call" list for online behavioral advertising. Rather than call for legislation, the FTC has pushed browser makers and advertisers to self-regulate.

Although targeting the same problem, Mozilla and Google are approaching opt-out online behavioral advertising from different directions.


Firefox: Do Not Track HTTP Header


On Sunday, Mozilla formally announced its plans to build a do-not-track feature into future versions of Firefox. Alex Fowler, the global privacy and public policy leader at Mozilla, explained the proposed feature on his blog:

“When the feature is enabled and users turn it on, web sites will be told by Firefox that a user would like to opt out of OBA. We believe the header-based approach has the potential to be better for the web in the long run because it is a clearer and more universal opt-out mechanism than cookies or blacklists.”

Mozilla's Sid Stamm has written up his thoughts on the proposal, explaining why the HTTP header approach was chosen for Firefox:

“Currently, to opt out of online behavioral advertisements, you have to get a site to set an opt-out cookie so they won’t track you. There are various web sites that help out (NAI, IAB UK) and there are Firefox add-ons (TACO, beef taco, etc.) that can streamline this process. But this is a bit of a hack; it’s nearly impossible to maintain a list of all the sites whose tracking people may want to opt-out from. It would be more attractive if there was one universal opt-out signal that would tell all sites you want to opt out.”

Instead, Stamm proposes the use of an HTTP header that is transmitted with every HTTP request and that lets ad networks know a user does not want to be tracked.
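The mechanics are simple: the browser attaches one extra header to every request, and a cooperating ad network checks it before setting any tracking cookie. A minimal sketch of both sides (the header-checking logic is illustrative; as the article notes, compliance by ad servers is voluntary):

```python
def make_request_headers(opt_out: bool) -> dict:
    """Browser side: one header on every request, instead of maintaining
    a separate opt-out cookie for every tracking site."""
    headers = {"User-Agent": "ExampleBrowser/1.0"}  # placeholder UA string
    if opt_out:
        headers["DNT"] = "1"  # "do not track me"
    return headers


def should_track(request_headers: dict) -> bool:
    """Ad-network side: a cooperating server checks the header before
    setting a tracking cookie. Nothing forces it to honor the signal."""
    return request_headers.get("DNT") != "1"


assert should_track(make_request_headers(opt_out=False)) is True
assert should_track(make_request_headers(opt_out=True)) is False
```

Because the signal rides on every request, it covers trackers the user has never heard of, which is exactly what a per-site opt-out cookie list cannot do.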

This approach of using a Do-Not-Track HTTP header differs from some other opt-out online behavioral advertising solutions, which utilize either opt-out cookies or an opt-out registry. Michael Hanson from Mozilla Labs has posted a technical analysis of Mozilla’s proposal on his blog.

One advantage of using a header rather than a cookie to carry opt-out information is that even if a user clears his or her cookies, the opt-out setting remains in place.

As The Wall Street Journal points out, however, for Mozilla’s tool to work, “tracking companies would need to agree to not monitor users who enable the do-not-track feature.” As of this writing, no companies have publicly agreed to participate. Mozilla will have to convince advertisers to comply with its header proposal for this idea to actually gain traction.


The Google Approach


Meanwhile, Google has released a new extension for Google Chrome called Keep My Opt-Outs. The Google Code page for Keep My Opt-Outs describes the extension as a way to “permanently [opt] your browser out of online ad personalization via cookies.”

The extension works with Google-served ads as well as with ads from companies that have signed up with AboutAds.info.


Other Initiatives


Last month, Microsoft announced that IE 9 will include a way for users to create lists of sites or companies that are blocked from tracking their data. This is significant because of reports that Microsoft previously removed similar features from Internet Explorer 8 at the behest of online advertisers.

The features and plugins proposed by Google, Mozilla, Microsoft and others are a good start toward making it easier for users to opt out of online behavioral ads; however, these solutions will only work if advertisers and browser makers can work together in a cohesive way.
