Moving on…

Posted: May 27, 2011 in Mozilla

The past (almost) 2 years have been a great ride and I will be forever astonished by the talented people at Mozilla and in the community. With that said, it is hard to leave a place like this, but for me that time has come. June 3rd will be my last day here at Mozilla and I will be starting on new challenges June 6th.

Leaving Mozilla is one of the hardest decisions I have made, and it is even harder to leave such a great team, company, and mission behind. The infrastructure security group wasn't here when I started, and I'm proud to say that it is now finding its footing and establishing itself as a "security enabler." Keeping an inherently open environment secure isn't without its challenges, but it has been an exciting process. The team that I am leaving behind is nothing short of top notch. They have some industry-leading plans to move infrasec forward for Mozilla and the community.

Thank you to all the people in the community who I have had a chance to work with over the past 2 years. You have made working at Mozilla an even more enjoyable experience than I ever would have imagined. I will still be around on IRC and won't be disappearing completely any time soon. I will still be blogging about security and hacking on various things. I look forward to more interactions and to learning from you, so keep up the good work.

Back in January, I was having a casual conversation about passwords at a local security gathering and was asked what we use for storing passwords. I said that we are using sha-512 w/ per user salts but that we are looking at moving away from this to something much stronger. The response that I received from this person was pretty much in line with other comments I have received and seen on some of our forums. The two most common responses are: "Oh good, you are using per user salts" and "yeah, using sha-512 is much better than md5." Granted, these comments are true: using sha-512 is better than md5, and per user salts are better than no salts, but there is still a weakness that I feel is overlooked.

Per user salts do provide value, but the problem is that they are typically stored with the hash. So the entry in the database looks something like this:

sha512${salt}${hash}

Or perhaps the salt is stored in a separate column or table and is grabbed as needed during password verification. Either way, both the hash and the salt are stored in the same database.
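As a minimal sketch of that kind of scheme (my own illustration in Python, not Mozilla's actual code), creating and verifying such an entry looks roughly like this:

import hashlib
import os

def create_entry(password):
    salt = os.urandom(16).hex()                       # random per user salt
    digest = hashlib.sha512((salt + password).encode()).hexdigest()
    return "sha512$%s$%s" % (salt, digest)            # salt is stored right next to the hash

def verify(password, entry):
    _, salt, digest = entry.split("$")                # salt is read back out of the entry
    return hashlib.sha512((salt + password).encode()).hexdigest() == digest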

In the event the hash is disclosed or the database is compromised, the attacker already has one of the two values used to construct the hash (namely, the salt). All the attacker needs to figure out now is the password that was fed into the hash formula and the order in which the two values were combined. Order being:

hash = hash_operation({password} + {salt}), or
hash = hash_operation({salt} + {password})

The two issues with this are: 1) since our source code is public, the order in which we salt is also public; and 2) due to our scale and usability requirements, we store the hash and salt in the same place.

What am I getting at? The real problem is that we shouldn't be relying solely on hashing algorithms to secure this data. Once the salt is known, it would be pretty trivial to run a dictionary attack against a batch of hashes in little to no time and get a pretty significant hit rate. (More on this subject in another post.)
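To make that concrete, here is a sketch (again my own illustration, assuming the sha512$salt$hash format above) of how little work a dictionary attack takes once the salt is sitting next to the hash: one sha-512 call per guess.

import hashlib

def dictionary_attack(entry, wordlist):
    _, salt, digest = entry.split("$")          # the salt comes free with the stolen entry
    for guess in wordlist:
        if hashlib.sha512((salt + guess).encode()).hexdigest() == digest:
            return guess                        # password recovered
    return None                                 # not in the wordlist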

What is the solution? Right now, the solution is moving away from sha-512 w/ per user salts to something like bcrypt, but with a twist. The twist is adding a layer of defense plus some controls over who can unlock the password equation. In pseudo code, this is what I mean:

{hmac_result}   = hmac_create_operation( key = system_shared_password, message = user_password )
{bcrypt_result} = bcrypt_create_operation( hmac_result + salt, bcrypt_iterations )

The bcrypt result would be stored in the database like this:

bcrypt$bcrypt_result$shared_key_version

The system_shared_password (sometimes called a local secret or "pepper") is not stored within the database. Instead, this value is stored on the operating system within a protected file. With this configuration, a SQL injection vulnerability that provides access to the password hash data would not provide access to the system_shared_password. Only in the scenario of a full system compromise would both secrets (the password hashes and the system_shared_password) be exposed.
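Here is a rough sketch of the scheme in Python (my own illustration using the third-party bcrypt package, not the actual code we ended up with). The choice of HMAC-SHA256 and the file path are assumptions on my part; SHA-256 is used so the hex-encoded HMAC output stays under bcrypt's 72-byte input limit.

import hashlib
import hmac

import bcrypt  # third-party package


def read_shared_password(key_version):
    # Hypothetical helper: the local secret lives in a protected file on the
    # operating system, outside the database, one file per key version.
    with open("/etc/app/shared_password.%s" % key_version, "rb") as f:
        return f.read().strip()


def create_hash(user_password, key_version="1"):
    shared = read_shared_password(key_version)
    hmac_result = hmac.new(shared, user_password.encode(), hashlib.sha256).hexdigest()
    # bcrypt generates and embeds its own per user salt and work factor.
    bcrypt_result = bcrypt.hashpw(hmac_result.encode(), bcrypt.gensalt(rounds=12))
    return "bcrypt$%s$%s" % (bcrypt_result.decode(), key_version)


def verify(user_password, stored):
    body, key_version = stored.rsplit("$", 1)         # the version is the last field
    bcrypt_result = body[len("bcrypt$"):]             # bcrypt output itself contains "$"
    shared = read_shared_password(key_version)
    hmac_result = hmac.new(shared, user_password.encode(), hashlib.sha256).hexdigest()
    return bcrypt.checkpw(hmac_result.encode(), bcrypt_result.encode())

Because the HMAC key never touches the database, a dump of the password table alone is not enough to start guessing passwords offline; the shared_key_version field simply records which local secret a given hash was created with.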

We feel this solution gives us better control over who can unlock the hashes and adds a layer of defense around them. We have also put some code together, thanks to fwenzel.

Why use per user salts at all? Two reasons: first, per user salts ensure that two users with the same password will not have the same hash; second, salts prevent an attacker from using a precomputed rainbow table against the hashes.

More to come on this subject, as our goal is to increase security and the time it would take to brute-force or dictionary-attack a hash. Our goal is, and always will be, to provide better protection around authentication systems.

Chris Lyon
Director of Infrastructure Security
a.k.a the hash buster

x-posted to: https://blog.mozilla.com/webappsec/2011/05/10/sha-512-w-per-user-salts-is-not-enough/

So after my blog post, I have received many questions about passwords, how many to use, and what is appropriate. Based on the questions, there are many people who use the same password for everything.

First off, don't use the same password for everything. Using just one password for every site is a big risk. Having groups of passwords based on the type of site is a good idea, but even better is to have a unique password per site. If you are not willing to do that, you should at least have groups of passwords. I am not going to preach about password rotation, password length, and password strength yet (maybe a future post), but I will expand on groups of passwords.

Groups of Passwords
So if you don't want to use a unique password per site, I would suggest that you set up groups. Suggested groups are: social sites, hosted email, corporate or work sites, and banking sites. This seems to be a good separation. If your social-site password leaks, it doesn't affect your work or banking accounts.

Management of your Passwords
The next obvious question is how to manage all of these passwords. There is an article on support.mozilla.org about this very subject called Remembering passwords. You can use Firefox's password manager to manage all these new passwords, and you can use Firefox Sync to securely sync your passwords between your devices.

So now you are out of excuses and you can be more secure.

Chris Lyon
Director of Infrastructure Security

Want to run this on your own network?
Do you remember seeing this at the Mozilla Summit 2010?

Maybe you recall my previous blog post: Mozilla Summit – “Are We Being Secure?” and are password(s) safe?

So if you want to run this yourself, here is the location of the code and directions: svn.mozilla.org/projects/infrasec/are_we_secure. As they say, please use it responsibly and respect people's privacy.

Did you grab your cookies and milk? OK, so you can forget the milk. The cookie(s), on the other hand, are not the type that you can eat; they are the kind used by the web for various purposes.

A recent article in the WSJ entitled "What They Know" analyzed the top 50 Internet web sites and examined what tracking mechanisms each site employed and the corresponding privacy policies. Mozilla was rated as a "low exposure risk," which is all well and good, but in the process the article identified 21 trackers on our web properties (aka web sites). So the obvious questions are: what are those trackers, who placed them, and what do they do? Below is a little more detail to fill out the picture:

Set by: Mozilla
Cookie: Omniture
If disclosed, to whom? Mozilla only; Omniture is contractually bound to only share info with Mozilla.
Why? 3rd party cookie intentionally set by Mozilla and used to provide analytics for usage of Mozilla web sites.
Where is it used? Across Mozilla domains.
Notes: This was listed as Mozilla because we are using our domain as the destination for all Omniture cookies. Omniture is 3rd-party analytics software used by Mozilla.

Set by: Mozilla
Cookie: Urchin (Google)
If disclosed, to whom? Mozilla only.
Why? Provides analytics for usage of Mozilla web sites. No longer used.
Where is it used? Various Mozilla domains.
Notes: Urchin is 3rd-party software self-hosted by Mozilla. It shows as Google because of the acquisition; Google never receives any data.

Set by: 3rd-party-based content
Cookie: Vimeo (Flash cookie)
If disclosed, to whom? Vimeo.com
Why? Video publishers want to track where the video content is being used.
Where is it used? In 3rd party blogs aggregated on Mozilla.com, i.e. Planet and the Add-ons blog.
Notes: To our knowledge, they don't have a "no cookie" option.

Set by: 3rd-party-based content
Cookie: YouTube (Flash cookie)
If disclosed, to whom? YouTube
Why? Video publishers want to track where the video content is being used.
Where is it used? In 3rd party blogs aggregated on Mozilla.com, i.e. Planet and the Add-ons blog.
Notes: We have updated our practices to discourage this use.

Set by: 3rd party (set by blog software)
Cookie: ShareThis (beacon)
If disclosed, to whom? ShareThis.com
Why? 3rd party widget/plugin used on blogs for sharing content with others.
Where is it used? Included in blogs hosted under Mozilla domains.
Notes: The URL was actually not working, but it still shows as a cookie being set. This has since been disabled globally on our blogs.

It is important to note that the summary above represents the point in time prior to the WSJ report. To the extent that video or other content is embedded in user-generated content, and sometimes even in our own posts, those cookies may change over time. That being said, the Mozilla cookies which we directly control change less frequently. These cookies provide valuable site analytics so we can both understand how our properties are used in the aggregate and learn how to improve them. Most importantly, the information we obtain through these cookies is aggregate information that is used for no other purpose. We also have contractual provisions to protect the data Omniture collects on our behalf, and before we adopted Omniture, Mitchell Baker led a long public discussion in 2008 about the implications. In the case of Urchin, we ran that software internally, so there were no 3rd parties involved at all.

The WSJ article, in addition to contributing to the ongoing privacy dialogue, has also helped us, as it hopefully has others. There's always room for improvement in this area. Seeing the 3rd party cookies that come with embedded video called attention to something we want to discourage, but it's also pretty hard to excise completely. Greater awareness and more frequent house cleaning are some basic steps. We've also identified methods to use video in ways that are privacy-forward, such as those described in Sid's recent blog post "privacy preserving video," where he pointed out other options for video: "Flash is not the only way to display video on the web!"

We realize that privacy on the web is a hard problem to solve. It's full of complexity, context, and balancing. It's also uniquely personal and goes to the core of our web experience because it's about us and what we do. But bottom line, it's super important and we make privacy a high priority; the WSJ article shows that we do value this. We're also working on some other initiatives in this area, which we'll write about soon.

Does this page look familiar?

As many of you know, this is a play off the famous "Wall of Sheep," aka "Wall of Wonder," aka "Wall of Shame," that is displayed at most security conferences. With the Blackhat / DEFCON week just around the corner, I can't think of a better time to discuss what was found and how it was done. I also want to thank my dedicated and talented intern, Chris Van Wiemeersch, for his hard work on this project. He did an excellent job presenting the data you see above.

An initial side note: we didn't display the user names and passwords because we wanted to make this an educational journey rather than a shameful experience. Beware, though: if you are at a security conference, they will not be as kind and could well show your user name and password. (Some only show partial passwords.)

The two questions I was asked most during the conference were: "How are you doing this?" and "How do we really know we are secure?" The "how" is rather simple; there isn't any magic or voodoo to pulling user names and passwords if they are going over an unencrypted channel. Many people think that because the wireless network is encrypted their information is safe, but this is an application issue, not a wireless or network problem. Some of the time it may just be a setting within the application to use encryption. (This really depends on the application and whether it uses encryption at all; we discovered a few that simply didn't.) To test whether passwords are going over clear text, there are a few utilities that can pull this information off the wire or wireless network. Snort, dsniff, and ettercap all come to mind when trying to figure out if passwords are flying around in plain readable text. We used ettercap for our "Are We Being Secure" page since it was quick and easy. Once you have the utility set up, it is just a matter of watching the output.
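If you would rather script the check yourself against your own devices and traffic, here is a rough sketch of the same idea in Python using scapy (an assumption on my part; as noted above, we used ettercap). It watches plain HTTP traffic on one interface and flags requests that look like they carry credentials in the clear.

from scapy.all import IP, Raw, TCP, sniff  # requires scapy and root privileges

CREDENTIAL_HINTS = (b"password=", b"passwd=", b"pass=", b"login=", b"user=")

def flag_cleartext_credentials(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = pkt[Raw].load
        if any(hint in payload for hint in CREDENTIAL_HINTS):
            # Don't print the values themselves; knowing the app leaks is enough.
            print("Possible cleartext credentials sent to", pkt[IP].dst)

# Unencrypted web traffic only; anything sent over TLS never shows up here in the clear.
sniff(filter="tcp port 80", prn=flag_cleartext_credentials, store=0)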

The only way to really know you are secure and don't have passwords flying around in plain text is to test every application on every device you use. I just wouldn't do that at the conference. The biggest takeaway, and the main point behind the "Wall of Sheep," is to bring to people's attention that this can happen, and that it happens more often than you think. Password diversity, or a few throwaway passwords, also goes a long way: if your password for some social networking site is exposed, it can't be used to access your other systems.

P.S. we will be posting the code and instructions for running this system on your own. We are also looking at the data and should have something more formal to present in the near future.

One of the biggest issues with logging, especially in environments where you have lots of diverse logs, is getting accurate, meaningful logs. In an application load-balanced environment (NetScaler, F5, Zeus, or whatever), if the load balancers are in proxy mode, you are not getting the real client IP address unless you use settings like these in the Apache httpd.conf:

LogFormat "\"%{X-Forwarded-For}i, %h\" %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
SetEnvIf X-Forwarder-For "^.*\..*\..*\..*" is-forwarder
CustomLog logs/access_log combined env=!is-forwarder
CustomLog logs/access_log proxy env=is-forwarder

Now, your application load balancers must be set up to add X-Forwarded-For, which just puts this into the HTTP header:

 X-FORWARDED-FOR: 1.1.1.1

Apache will grab this and insert it into the logs, provided the header exists. Otherwise, it assumes there is no proxy in front and puts the connecting IP address in the log.
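As a quick smoke test of the config above (hypothetical, assuming Apache is reachable on localhost), you can send one request with the header and one without, then check that each landed in access_log with the expected format:

import urllib.request

# Should be logged with the "proxy" format (X-Forwarded-For listed first).
proxied = urllib.request.Request("http://localhost/",
                                 headers={"X-Forwarded-For": "1.1.1.1"})
urllib.request.urlopen(proxied)

# Should be logged with the "combined" format (connecting IP only).
urllib.request.urlopen("http://localhost/")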