Google wants you to help find security flaws in its browser, Chrome — and the search giant is paying a handsome reward.

The company says attendees at next month’s CanSecWest security conference in Vancouver can earn up to $1 million in cash and Chromebooks in exchange for revealing the flaws.

“The aim of our sponsorship is simple: we have a big learning opportunity when we receive full end-to-end exploits. Not only can we fix the bugs, but by studying the vulnerability and exploit techniques we can enhance our mitigations, automated testing, and sandboxing. This enables us to better protect our users,” the Google Chrome security team wrote in a blog post.

The prizes fall into the following categories, and multiple rewards can be issued per category:

$60,000 – “Full Chrome exploit”: Chrome / Win7 local OS user account persistence using only bugs in Chrome itself.

$40,000 – “Partial Chrome exploit”: Chrome / Win7 local OS user account persistence using at least one bug in Chrome itself, plus other bugs. For example, a WebKit bug combined with a Windows sandbox bug.

$20,000 – “Consolation reward, Flash / Windows / other”: Chrome / Win7 local OS user account persistence that does not use bugs in Chrome. For example, bugs in one or more of Flash, Windows or a driver. These exploits are not specific to Chrome and will be a threat to users of any web browser. Although not specifically Chrome’s issue, we’ve decided to offer consolation prizes because these findings still help us toward our mission of making the entire web safer.


Thumbnail image courtesy of iStockphoto, alija

More About: Google, google chrome, security

Mashable OP-ED: This post reflects the opinions of the author and not necessarily those of Mashable as a publication.

Jon Barocas is the founder and CEO of bieMEDIA, a Denver-based online marketing and media solutions company that specializes in video content production and distribution, mobile visual search, technology platforms, SEO, VSEO and more.

Like most technology fans, I am always ready and willing to try any technology that promises to simplify my life. QR codes seemed to present an accessible and uniform way for people with smart devices to interact with advertising, marketing and media. Those little squares of code seemed to open a world of opportunity and potential. But after using them for a length of time, I shifted my perspective.

My initial honeymoon with QR codes was very short-lived. The novelty of framing a code with my device quickly lost its luster. I started to view QR codes as a barrier to additional information. And in many instances, the rewards (whatever I received as a result of scanning the code) did not measure up to the effort of the transaction itself.

Consider a recent study by comScore, which found that only 14 million American mobile device users have interacted with a QR code. In essence, less than 5% of the American public has scanned a QR code. So where’s the disconnect?

Inadequate technology, lack of education and a perceived dearth of value from QR codes are just three of the reasons mobile barcodes are not clicking with Americans. But it goes deeper than that.

Humans are visual animals. We have visceral reactions to images that a QR code can never evoke; what we see is directly linked to our moods, our purchasing habits and our behaviors. It makes sense, then, that a more visual alternative to QR codes would not only be preferable to consumers, but would most likely stimulate more positive responses to their presence.


The QR Alternative


Enter mobile visual search (MVS). With MVS, you simply point at a product or logo and shoot a picture with your smartphone’s built-in camera. Within seconds, the MVS application will provide product or company information, or even the option to make a purchase right then and there on your mobile device.

MVS is a far more compelling and interactive tool to enable mobile marketing and commerce. In today’s increasingly mobile world, instant gratification is the norm, and taking the extra step of finding a QR code scanner on your mobile device no longer makes sense. With MVS, you are interacting with images that are familiar and desirable, not a square of code that elicits no reaction.

The opportunities are boundless with MVS. Unlike two-dimensional barcodes and QR codes, MVS will have wrap-around and three-dimensional recognition capabilities. Even traditional advertising will be revitalized with MVS. For example, picture an interactive print campaign that incorporates MVS as part of a competition or game. Marketers can offer instant gratification in the form of videos, mobile links, coupons or discounts as incentive for taking the best pictures of a particular product or logo.

The world has already started to migrate to MVS. For example, companies in Argentina and South Korea currently allow commuters waiting for subways or buses to view images of groceries or office supplies. Embedded within these images are recognition triggers: smartphone users place and pay for an order to be delivered or picked up within minutes.

Also, MVS can cash in on word-of-mouth marketing. Marketers will seamlessly link their campaigns to social networks so consumers can share photos and rewards, such as vouchers, coupons or music downloads, with their friends and followers.


QR Code Security Risks


In addition to being a more versatile medium, mobile visual search is also more secure than QR code technology. Cybercriminals are able to cloak smartphone QR code attacks due to the nature of the technology — QR codes’ entire purpose is to store data within the code. There is no way to know where that code is going to take you: a legitimate website, infected site, malicious app or a phishing site. MVS’s encryption modality will eliminate the opportunity for malicious code to download to your smartphone.

Recently, there have been documented cases of QR code misuse and abuse around the globe. For instance, infected QR codes can download an app that embeds a hidden SMS texting charge in your monthly cellphone bill. QR codes can also be used to gain full access to a smartphone — Internet access, camera, GPS, read/write local storage and contact data. All of the data from a smartphone can be downloaded and stolen, putting the user at risk for identity theft — without the user noticing.
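For readers curious about what a defense looks like on the scanner side, the sketch below is a minimal, purely illustrative example (not taken from any particular app) that treats the decoded QR payload as untrusted input. It only clears plain HTTPS links to known hosts for automatic opening; everything else, from raw IP addresses to SMS triggers, is flagged for manual review. The allowlist and example payloads are assumptions for illustration.

import java.net.URI;
import java.net.URISyntaxException;
import java.util.Set;

public class QrPayloadCheck {

    // Illustrative allowlist; a real scanner app would apply its own policy.
    private static final Set<String> TRUSTED_HOSTS =
            Set.of("example.com", "www.example.com");

    // Decide whether a decoded QR payload is safe to open automatically.
    // Anything that is not a plain HTTPS link to a known host should be
    // shown to the user instead of being opened.
    public static boolean safeToOpen(String decodedPayload) {
        try {
            URI uri = new URI(decodedPayload.trim());
            return "https".equalsIgnoreCase(uri.getScheme())
                    && uri.getHost() != null
                    && TRUSTED_HOSTS.contains(uri.getHost().toLowerCase());
        } catch (URISyntaxException e) {
            // Payload is not parseable as a URI (it could be vCard data,
            // free text, etc.), so never open it automatically.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(safeToOpen("https://example.com/offer"));       // true
        System.out.println(safeToOpen("http://198.51.100.7/malware.apk")); // false
        System.out.println(safeToOpen("SMSTO:97605:subscribe"));           // false
    }
}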

Mobile visual search is a safer and more secure technology that can provide more information and content than a QR code, without as many security risks. By focusing on real-world objects and images rather than code, MVS lessens the risk of a virus or Trojan attack.

Safety, security and versatility — there are many reasons that MVS will supplant QR codes. However, there is one important, largely overlooked reason to favor MVS over QR codes: For the first time, we will be able to connect with our actual surroundings in a truly interactive way. We will be able to provide a virtual marketplace that is familiar and accessible. Humanizing this interaction and making it more visual are the foundations of MVS’s imminent success.

Image courtesy of iStockphoto, youngvet

More About: contributor, design, features, mobile apps, Opinion, QR Codes, security, trending

Don’t give up your wallet and plastic cards just yet — at least, not until Google Wallet gets a security update.

The Android-only service, which lets you pay with your smartphone, turns out to have a major security flaw. If someone gets hold of your phone, they can effectively hit the reset button on Google Wallet and get themselves issued a new PIN.

The flaw, uncovered by TheSmartphoneChamp.com, wasn’t the first vulnerability found in Google Wallet this week. Zvelo, a malicious software detection service, found that Google Wallet could be hacked and the owner’s PIN obtained using an app, though that attack required a rooted phone.
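Zvelo’s finding also illustrates a more general point about short numeric PINs: once an attacker can read whatever check value the PIN is verified against (which reportedly required root access here), a four-digit code falls to brute force almost instantly. The sketch below is a generic illustration that assumes a salted SHA-256 hash of the PIN; it is not Google Wallet’s actual storage scheme.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Optional;

public class PinBruteForce {

    // Recover a 4-digit PIN given its salted hash. The hash scheme here
    // (SHA-256 over salt + PIN) is an assumption for illustration only.
    static Optional<String> recoverPin(byte[] targetHash, String salt)
            throws NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        for (int i = 0; i <= 9999; i++) {
            String candidate = String.format("%04d", i);
            byte[] digest = sha256.digest(
                    (salt + candidate).getBytes(StandardCharsets.UTF_8));
            if (MessageDigest.isEqual(digest, targetHash)) {
                return Optional.of(candidate);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Build a sample "stolen" hash for PIN 4321, then recover it.
        String salt = "example-salt";
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] stolen = sha256.digest((salt + "4321").getBytes(StandardCharsets.UTF_8));
        System.out.println(recoverPin(stolen, salt).orElse("not found"));
        // Only 10,000 candidates, so the search finishes in milliseconds,
        // which is why a short numeric PIN offers no real protection once
        // its check value can be read off the device.
    }
}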

The video below shows just how easy it is to access credit card information from Google Wallet. One major concern: Google Wallet is connected to your phone, not your Google account, so you can’t change your password online if your phone is lost or stolen.

Google said a fix would be available soon. “We strongly encourage anyone who loses or wants to sell their phone to call Google Wallet support toll-free at 855-492-5538 to disable the prepaid card,” said a spokesperson.

“We are currently working on an automated fix as well that will be available soon. We also advise all Wallet users to set up a screen lock as an additional layer of protection for their phone.”

The Google Wallet app was introduced in May 2011 and went live in September. It’s marketed as a paper-free way to store credit cards and pay for items with a tap on a PayPass pad using NFC technology. Shortly after its release, security concerns prompted Verizon to block the app from its Galaxy Nexus smartphone.

AT&T didn’t allow Google Wallet until recently. As Zvelo pointed out, that could be because AT&T, T-Mobile and Verizon are partners in ISIS, a joint venture that competes directly with Google Wallet.

By 2015, the value of all mobile money transactions is expected to reach $670 billion. Other companies, such as PayPal and Visa, have invested in their own mobile wallet technologies.

The FAQ section of the Google Wallet website says information stored in the app is protected by a chip called the Secure Element, which operates separately from the phone’s main operating system.


Do you use Google Wallet? Are you concerned about someone stealing your information? Tell us in the comments.

Image courtesy of iStockphoto, oonal

More About: Google, google wallet, hack, mobile security, Secure Element, security

Bouncer, a scanning service developed by Google, is designed to search Android Market for software that could be malicious, the company announced Thursday on its blog.

With the success of Android this year, the company says it wants to protect its many users and their devices from harm.

“Device activations grew 250% year-on-year, and the total number of app downloads from Android Market topped 11 billion,” Hiroshi Lockheimer, VP of engineering, wrote on the Google Mobile Blog. “As the platform continues to grow, we’re focused on bringing you the best new features and innovations — including in security.”

Bouncer will scan current and new applications, plus developer accounts. The blog post explained how the service will function.

“Here’s how it works: once an application is uploaded, the service immediately starts analyzing it for known malware, spyware and trojans. It also looks for behaviors that indicate an application might be misbehaving, and compares it against previously analyzed apps to detect possible red flags. We actually run every application on Google’s cloud infrastructure and simulate how it will run on an Android device to look for hidden, malicious behavior. We also analyze new developer accounts to help prevent malicious and repeat-offending developers from coming back.”

Bouncer was tested in 2011; comparing the first half of the year to the second, Google reported a 40% decrease in potentially malicious downloads from Android Market.

Google says Android was designed with security in mind from the beginning. And although no company can prevent malware entirely, a dynamic security plan can limit the damage those threats cause.

Some of Android’s core security features are:

  • Sandboxing: The Android platform uses a technique called “sandboxing” to put virtual walls between applications and other software on the device. So, if you download a malicious application, it can’t access data on other parts of your phone and its potential harm is drastically limited.
  • Permissions: Android provides a permission system to help you understand the capabilities of the apps you install, and manage your own preferences. That way, if you see that a game unnecessarily requests permission to send SMS messages, for example, you can simply not install it. (A sketch after this list shows how an app’s requested permissions can be read programmatically.)
  • Malware removal: Android is designed to prevent malware from modifying the platform or hiding from you, so it can be easily removed if your device is affected. Android Market also has the capability of remotely removing malware from your phone or tablet, if required.
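As a rough illustration of the permission system described in the list above, the sketch below (a hypothetical utility class, not Google’s code) uses Android’s standard PackageManager API to read the permissions another installed package requests — the same information Android Market surfaces to users before they install an app. The package name used in the comment is a placeholder.

import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.util.Log;

public final class PermissionInspector {

    private static final String TAG = "PermissionInspector";

    // Logs every permission requested by the given package, e.g.
    // android.permission.SEND_SMS or android.permission.READ_CONTACTS.
    // Example call: logRequestedPermissions(context, "com.example.somegame")
    public static void logRequestedPermissions(Context context, String packageName) {
        try {
            PackageInfo info = context.getPackageManager()
                    .getPackageInfo(packageName, PackageManager.GET_PERMISSIONS);
            if (info.requestedPermissions == null) {
                Log.i(TAG, packageName + " requests no permissions");
                return;
            }
            for (String permission : info.requestedPermissions) {
                Log.i(TAG, packageName + " requests " + permission);
            }
        } catch (PackageManager.NameNotFoundException e) {
            Log.w(TAG, "Package not installed: " + packageName);
        }
    }
}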

Google has long been fine-tuning the security features of its various products, although in the past those products have clashed with other mobile service providers over security concerns.

Are you an Android user? What do you think about Bouncer? Tell us in the comments.

Image courtesy of iStockphoto

More About: android, Google, Mobile, security, trending


Symantec’s pcAnywhere software could very well turn into “virusAnywhere” due to a security breach claimed by Anonymous.

Symantec, the anti-virus software company, warned users of pcAnywhere, a tool that allows remote access to your computer, to disable the software. Symantec revealed in a white paper that pcAnywhere’s source code was stolen in 2006 and that Anonymous could use that information to find and exploit vulnerabilities:

Upon investigation of the claims made by Anonymous regarding source code disclosure, Symantec believes that the disclosure was the result of a theft of source code that occurred in 2006.

The company is working on a set of updates and patches to fix the vulnerabilities, even though Anonymous, as far as we know, hasn’t capitalized on them yet. The source code could let malicious users build exploits and attacks targeted at pcAnywhere users to reveal session information, PC Mag reported.

This is not the first time a Symantec product has been compromised, PC Mag pointed out:

In early January, Symantec confirmed that source code used in its older enterprise antivirus products was stolen. Hacker group the “Lords of Dharmaraja” of India had threatened to publish the code online. Although the code dated back to 1999, security expert Alex Horan of CORE Security Technologies said there was still potential for harm.

For users who must keep using pcAnywhere, Symantec recommends installing the latest version of the software to limit potential damage.

Anonymous is proving to be an international force, not only attacking sites for fun but also acting as a kind of digital watchdog. When Megaupload was shut down amid the SOPA and PIPA controversies, alleged members of Anonymous went after SOPA supporters and even the State Department website. Members of Anonymous had previously gone after banks and big business during the financial crisis and even targeted child porn sites. It’s unclear how and why Anonymous would use Symantec’s pcAnywhere source code, but hopefully it would be for good and not ill.

What do you think of Anonymous going after Symantec’s source code? Are you a pcAnywhere user? What will you do? Sound off in the comments.




Image courtesy of Flickr

More About: anonymous, hacker, hacking, security, virus

A nasty security bug in Skype‘s iOS app can lead to users’ personal information being stolen.

The cross-site scripting (XSS) vulnerability, demonstrated in the video below, is present in Skype 3.0.1 and earlier versions of Skype’s iOS app.

It lets an attacker create malicious JavaScript code that runs when the user views a text message in Skype’s chat window. The code can be used to access any file that the Skype app itself has access to, including the address book on your iPhone.
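The underlying issue is a classic cross-site scripting pattern: untrusted text arriving over chat is rendered into an embedded web view without being escaped, so anything that looks like markup can execute. The sketch below shows the generic mitigation of escaping HTML metacharacters before display; it illustrates the technique in general and is not Skype’s actual fix.

public final class HtmlEscaper {

    // Escapes the characters that allow untrusted text to be interpreted
    // as markup or script when inserted into an HTML-based chat view.
    public static String escapeHtml(String untrusted) {
        StringBuilder out = new StringBuilder(untrusted.length());
        for (int i = 0; i < untrusted.length(); i++) {
            char c = untrusted.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String message = "<script>stealAddressBook()</script>";
        // Printed (and rendered) literally instead of executing:
        System.out.println(escapeHtml(message));
    }
}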


Skype is aware of the issue and is working on a fix. “We are working hard to fix this reported issue in our next planned release, which we hope to roll out imminently,” Skype said in a statement.

[via Superevr]

More About: hack, hackers, security, Skype, vulnerability

Christian Olsen is the head of Levick Strategic Communications’ social and digital media practice. Follow him on Twitter @cfolsendc.

Recently, online properties like Hulu, MSN and Flixster have been caught using a tougher version of the common cookie. These “supercookies” (aka “Flash cookies” and “zombie cookies”) serve the same purpose as regular cookies by tracking user preferences and browsing histories. Unlike their popular cousins, however, this breed is difficult to detect and subsequently remove. These cookies secretly collect user data beyond the limitations of common industry practice, and thus raise serious privacy concerns.

Supercookies are similar to the standard browser cookies most folks are familiar with, but are stored in different locations on a user’s machine, for example, in a file used by a plug-in (Flash is the most common). This makes them harder to find and delete, especially since a browser’s built-in cookie detection process won’t remove them either. Furthermore, some supercookies have additional capabilities, like regenerating regular cookies to prevent their removal by the user.

To make matters worse, removing supercookies is much more difficult. It requires the user to dig through the file system and delete them manually, an inconvenient task even for advanced users. The novice, on the other hand, likely won’t even realize supercookies exist, let alone be able to find them.
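To give a sense of what that digging involves, the sketch below walks a Flash Player shared-object directory and lists the .sol files that back Flash cookies. The directory locations noted in the comments are typical defaults and may vary by platform and Flash version; treat them as assumptions rather than guarantees.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FlashCookieFinder {

    // Lists Flash shared objects (*.sol files), the storage behind
    // "Flash cookies", under the given directory. Typical locations
    // (version-dependent; listed here as assumptions, not guarantees):
    //   Windows: %APPDATA%\Macromedia\Flash Player\#SharedObjects
    //   macOS:   ~/Library/Preferences/Macromedia/Flash Player/#SharedObjects
    //   Linux:   ~/.macromedia/Flash_Player/#SharedObjects
    public static void listFlashCookies(Path sharedObjectsDir) throws IOException {
        if (!Files.isDirectory(sharedObjectsDir)) {
            System.out.println("No Flash shared-object directory at " + sharedObjectsDir);
            return;
        }
        try (Stream<Path> paths = Files.walk(sharedObjectsDir)) {
            paths.filter(p -> p.toString().endsWith(".sol"))
                 .forEach(p -> System.out.println("Flash cookie: " + p));
        }
    }

    public static void main(String[] args) throws IOException {
        // Example: the typical Linux location for the current user.
        Path dir = Paths.get(System.getProperty("user.home"),
                ".macromedia", "Flash_Player", "#SharedObjects");
        listFlashCookies(dir);
    }
}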


The kind of data supercookies track isn’t typical cookie material. A browser restricts the typical cookie so that it can be written, read and ultimately removed only by the site that created it. The supercookie, on the other hand, operates outside of established safeguards. It can track and record user behavior across multiple sites. While it’s easy to understand that a site would want to track a user’s activity while she navigates its turf, it’s ethically questionable that site operators are able to record a user’s actions beyond the site’s own boundaries.

In several cases, a company’s supercookie is the result of its partnership with a digital marketing firm that places a high value on user behavior. Under FTC pressure, the Internet ad and marketing industry published “self-regulatory” policies, although those policies put little beyond a user’s medical records off-limits.

To the majority of the public, this type of Internet tracking is outside the bounds of acceptable conduct. While the “right to track” may be written into a terms-of-use or user agreement, it is often not fully disclosed or within the realm of industry standards, rendering its legal defense moot. Furthermore, tracking provokes a breach of trust between user and site, and consumers have historically shown little tolerance for brand betrayal.

While many companies that have been challenged on their use of supercookies were quick to stop, some have chosen to continue. Many web marketing firms, advertisers and less-than-scrupulous websites still refuse to follow industry best practices; they continue the practice knowingly. And many more sites don’t even realize they’re using supercookies in the first place.

Whether it has decided to cease web tracking or not, the company at risk needs to beware of losing control of already collected data. A data breach would result in catastrophic — and perhaps incurable — brand distrust. A user’s discovery of a company’s surreptitious data collection and the subsequent vulnerability of that data could easily spell the end of a brand’s reputation.

Companies that care about reputation and user trust should audit their sites and properties to ensure that data collection and the use of supercookies parallel user expectations. This analysis applies to the site, its advertisers and any third party tools or plug-ins. Companies need to ensure that all data collection has been thoroughly disclosed in order to avoid legal liability.

Companies should not wait for a problem to arise before initiating a comprehensive data security overview. A regular screening of all user data and its safeguards is good practice. The cost a company suffers for securing its data and customer trust is small compared to the business and public relations fallouts that would result from a security breach.

A successful company will always make a comprehensive attempt at transparency by handling data responsibly. The use of data tracking tools like supercookies does not rank highly in consumer acceptance, whether its application is technically “legal” or not. Regardless of the manner in which information is collected, know that negligent data handling will not be excused by claims that a company was in the dark about its collection practices. In the eyes of the consumer, the more data collected, the more of an obligation that company has to keep it safe.

Images courtesy of Flickr, ssoosay, Jeremy Brooks

More About: Business, cookies, data collection, privacy, trending


Everyone loves a bad-guy-gone-good story, and these black hat hackers who went from lives of crime to corporate nine-to-fives epitomize that genre.

Let’s first make an important distinction: Hackers are not criminals. In fact, “hacker” is a term of high praise in the developer community. But when a hacker is dubbed a “black hat,” it means he or she has broken laws in the pursuit of hacking — perhaps even that he or she has done so for personal gain.

However, many black hat hackers have gone legit in their more mature years. While it’s not uncommon to see former cybercriminals switching teams to work as IT security consultants, many of the more high-profile black hat hackers also find themselves writing books, doing journalism and even getting public speaking gigs in the cybersecurity world.

So with that understanding, let’s turn our gaze upon these seven fascinating personalities who once hacked indiscriminately and are now employed respectably — some of them even by the companies they once hacked.

Ashley Towns

Towns created the first-ever iPhone worm, a rickrolling bit of code that only affected jailbroken iPhones. Mere weeks after the worm started spreading, Towns was hired by mogeneration, a company that develops iPhone apps, mostly for other clients such as TrueLocal, FoodWatch and Xumii.

Call of Duty Hacker

A 14-year-old Dublin schoolboy hacked into the Microsoft Xbox system this spring. In stark contrast to how Sony handled PlayStation hackers like geohot, Microsoft decided to work with the kid instead. The company hopes to teach the indubitably talented hacker to “use his skills for legitimate purposes.”

Christopher Tarnovsky

Hardware hacker Christopher Tarnovsky began his journey repairing satellites for the U.S. Army. He started dabbling in illegal hacking in the late 1990s. However, he didn’t get into serious legal trouble until he was hired by Rupert Murdoch’s News Corp. to hack a rival company’s satellite TV chip. These days, Tarnovsky runs a hardware security firm and sticks to gray hat hacking, like proving Infineon’s “unhackable” chip was anything but in 2010.

Jeff Moss

Moss is the founder of the Black Hat and DEF CON computer hacker conferences. In the days before the Internet was a big thing, he ran BBSes for hacking and phreaking and provided a hub for a huge, underground network of hackers of all stripes, from the curious to the criminal. In 2009, he was sworn into the U.S. Homeland Security Advisory Council. And in April 2011, Moss was named chief security officer for ICANN, the agency that oversees the Internet’s domain names.

Michael Mooney

Mooney is best known for creating the Twitter bug Mikeyy, a worm designed to showcase Twitter’s security vulnerabilities. While the exploit was more gray than black hat, the worm could have gotten Mooney into serious legal trouble. However, Twitter didn’t press charges, and the 17-year-old Mooney was offered jobs by two software development firms. The teen accepted a position at web app shop exqSoft Solutions.

Owen Thor Walker

Also known as “akill,” Walker was charged as (and admitted to) being the ringleader of an international hacking group that caused nearly $26 million of damage. In 2008 he was hired by TelstraClear, the New Zealand subsidiary of Australian telecommunications company Telstra, to work with its security division, DMZGlobal.

Robert Tappan Morris

Morris is best known for creating the first Internet worm, the Morris Worm, in 1988. Later, he co-founded an online store, Viaweb, with Paul Graham, who would later found startup incubator Y Combinator. Viaweb was one of the first web-based computer applications. Now, Morris teaches computer science at MIT.





image credits: iStockphoto, airportrait, Flickr/pikturz, Wikipedia, Wired, Flickr/ICANN

More About: black hat, career, developers, hackers, jobs, web development series


The Tor Project has been recognized by the Free Software Foundation for its role in the protests and revolutions around North Africa and the Middle East.

This software, which allows for safe and anonymous web browsing, was given the FSF’s Award for Projects of Social Benefit. The award is intended to highlight “a project that intentionally and significantly benefits society through collaboration to accomplish an important social task.”

Without question, enabling the Internet’s role in political revolution has been an important social task, and one that the Tor Project has explicitly supported. In its section on activist users, Tor reps state that anonymous browsing is essential for reporting abuses of power and organizing protests, especially from behind government-sponsored firewalls and ISP blocks.

“Using free software,” the FSF writes, “Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt.”

In Iran, political dissent before, during and after the 2009 election caused a firestorm on Twitter and Facebook; as a result, the government began censoring many apps and sites. The Tor Project allowed users to bypass the blocks and access the web apps they needed to continue to organize.

And in Egypt and other countries in North Africa and the Middle East, a couple of months of steady political unrest have been punctuated by periods of site-specific blocks and even total Internet blackouts. Once again, Tor was instrumental in allowing many users to keep accessing the web, where they communicated internally and externally and rallied for change.

Andrew Lewman, executive director of the Tor Project, was present to accept the award from the FSF and its founder and president Richard M. Stallman during a March 19 ceremony.

Previous winners of this award include such notable FOSS projects as the Internet Archive, Creative Commons and Wikipedia.

More About: award, Egypt, foss, free software, middle east, politics, tor


It started with a tweet Saturday morning, sounding the alarm about a security breach in the popular microblogging platform Tumblr. “OMG… The Tumbeasts are spitting out passwords!” it warned.

That tweet spread like wildfire, notifying the world of a coding error that opened a security hole with the potential of revealing users’ passwords, server IP addresses, API keys and personal information.

Fortunately, Tumblr reacted, fixing the problem and then issuing this official message about 5 to 6 hours after the flaw was discovered:

“A human error caused some sensitive server configuration information to be exposed this morning. Our technicians took immediate measures to protect from any issues that may come as a result.

We’re triple checking everything and bringing in outside auditors to confirm, but we have no reason to believe that anything was compromised. We’re certain that none of your personal information (passwords, etc.) was exposed, and your blog is backed up and safe as always. This was an embarrassing error, but something we were prepared for.

The fact that this occurred at all is still unacceptable, and we’ll be seriously evaluating and adjusting our processes to ensure an error like this can never happen again.

Please let us know if you have absolutely any questions.”

What caused the error? That’s still under intense discussion at The Hacker News and elsewhere in the hacker community, but many think the culprit was an errant piece of PHP code. Obviously, spelling counts.

Let us know in the comments if you think those who discovered the security flaw were more eager to broadcast its existence than to notify the Tumblr coders who might have been in a position to quickly fix it.

More About: Breach, flaw, php, security, tumblr
