A Salesforce employee was phished, which led to some of their customers being phished.
Once the phishers had the contact list, they attempted to phish Salesforce.com customers. Harris wrote: "Unfortunately, a very small number of our customers who were contacted had end users that revealed their passwords to the phisher."
I don't think I've seen documentation of this sort of cascading phishing attack before.
CNet is reporting that PayPal is launching a service aimed at reducing phishing attacks. This sounded like good news until I started reading more.
"If a fraudulent party somehow got hold of a person's username and password, they still wouldn't be able to get into the account because they don't have the six-digit code," Sara Bettencourt, a PayPal spokeswoman, said by phone Thursday. "This by no means is a silver bullet that is going to stop fraud. This is just another layer of protection."
And CNet further comments:
eBay and PayPal are common phishing targets. These prevalent scams typically use fraudulent Web sites made to look like legitimate sites and spam e-mail to trick people into giving up their personal information such as login names and passwords.
Today phishers can make a web site that looks exactly like the real web site, including the request for the user ID and password. Users who sign up for this device will then have a user ID, a password, and a new SecurID number that changes every 30 seconds.
Is the theory behind this "layer of protection" that phishers will not know how to add one more field for victims to enter this new rotating number?
Adding this type of device will protect the early adopters. But as soon as a large percentage of customers use this device, the phishers will take it into account, and PayPal will be back to square one.
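For reference, a rotating six-digit code of the kind described works roughly like this sketch (an RFC 6238-style time-based one-time password; SecurID's actual algorithm is proprietary, but the shape is similar):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style code: HMAC-SHA1 over the current 30-second time step."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code changes every 30 seconds, so a stolen code goes stale quickly.
# But a phisher who relays the victim's code to the real site in real time
# is still inside the window -- which is why this layer alone can't stop
# phishing, only replay of old credentials.
print(totp(b"12345678901234567890", 59))  # -> 287082 (the RFC test vector)
```

The point of the sketch is the 30-second window: it defeats a phisher who saves credentials for later, but not one who uses them immediately.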
I'll be interested in reading their policies around people who lose their token, or who left their token at home while on vacation. Will PayPal provide a mechanism for customers to temporarily disable the token under these circumstances? How will they be sure it's really me asking for them to lower their security level and not a thief?
Brian Krebs' article in the Washington Post
examines "cross site scripting" (XSS) as an attack vector for phishing. Although I don't doubt that some XSS attacks are possible, and that some have already been mounted, it's another example of people ignoring the big pink elephant sitting in the middle of the room. Of course every web site should take steps to strip out any input that might be used in an XSS attack.
But financial institutions and merchants should also be wrapping their entire site in SSL. It's super cheap, and dramatically improves security. Simply put, sites that extensively use SSL are going to be nearly impossible to phish. Why? Because if you know that every Wells Fargo page uses SSL (which it does!), you'll know that any Wells Fargo page without a lock in the browser's chrome is an attack.
If there is a silver lining in all of this XSS madness, it's that for the most part, phishers have been content to try to scam online banking customers by directing them via e-mail to wholesale counterfeit sites, said Dan Hubbard, senior director of security and technology research for Websense, an anti-phishing and e-mail security company.
"We've seen several attacks in the wild that utilize [cross-site scripting flaws] on banking sites, and it's definitely a big future threat," Hubbard said. "However, right now there is just so much low-hanging fruit for these guys that it's kind of not needed."
That's exactly right. Why bother with a sophisticated attack when for years the banks have taught their customers that they will not see the lock on login pages? Just click on File / Save As... and phish away!
I read a lot of articles about phishing. People write and talk about how you can protect yourself, cite stats on the number of attacks per month, and the cost to the economy.
But I don't hear one really important question: Why does phishing exist?
One reason is that your bank, broker, and favorite online stores fail to use SSL throughout their site. If sites that hold my important data and have access to my money used SSL everywhere on their site, I'd be able to tell if I was at the right web site by looking at the lock icon. (And yes, phishers would move to SSL, but then we'd have a chance to catch them, and to improve the CA issuance processes as a bonus!)
Another reason phishing exists is that companies outsource their marketing to third parties. In doing so, major companies mimic all the major elements that point to phishing. They make it impossible for even a security-aware individual to tell phishers from real companies. Again, it's not that phishers are acting like real companies, it's that major companies are acting like phishers.
Here's a great example. I got an email today that claimed to be from the organizers of the RSA Security Conference. It wants me to sign up for an online class. That sounds fine, but look at the URL I'll go to when I visit the web site:
The URL that I'm about to click on looks like this: https://rsasecurity1.rsc03.net/servlet/cc5?jkHQUYSUQUVI
I've never heard of this rsc03.net before. It's certainly not the same as rsasecurity.com
. So can I trust it?
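The check I'm doing by eye, asking whether the link's host actually belongs to the domain I trust, can be sketched in a few lines. (This is a crude heuristic; real code would consult a public suffix list.)

```python
from urllib.parse import urlparse

def same_site(link: str, trusted_domain: str) -> bool:
    # Crude check: does the link's host equal the trusted domain,
    # or end with "." + trusted_domain (i.e., a subdomain of it)?
    host = (urlparse(link).hostname or "").lower()
    return host == trusted_domain or host.endswith("." + trusted_domain)

same_site("https://rsasecurity1.rsc03.net/servlet/cc5?jkHQUYSUQUVI",
          "rsasecurity.com")                                    # -> False
same_site("https://www.rsasecurity.com/node.asp?id=2470",
          "rsasecurity.com")                                    # -> True
```

Note that `rsasecurity1.rsc03.net` fails the check even though it starts with "rsasecurity": the registrable domain is `rsc03.net`, which is exactly the trick phishers rely on.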
I clicked on the link, and was indeed brought to an SSL-enabled web site. The site then proceeded to ask me for lots of personal information. That's pretty suspicious.
Then I decided to look at other URLs in the email to see what I could find. Here's what the footer says:
RSA Security respects your online privacy. This email is being sent to people who've recently inquired about RSA Security products, services or events. You can view our e-mail policy here: http://www.rsasecurity.com/node.asp?id=2470
But when I click on the link, it doesn't take me where it says. It claims it will take me to a domain I trust (rsasecurity.com), but actually takes me to a domain I've never heard of before (rsc03.net). It must be a phishing attack, right?
The sad thing is that I'm pretty sure this is a real email, and not a phishing scam. (The clumsy mail verification link at the top of the page gives me very little confidence.) But given all the clues to the contrary, it's a real gamble. Sadly, when companies start to act like phishers, they inadvertently train us not to look at the lock icon, to type our passwords into pages with no SSL, and not to inspect the URLs we're about to visit. And when the organizers of the world's biggest security conference make these mistakes, how can I reasonably hold my online bookseller to a higher standard?
I'm ready for a change.
According to this article on searchsecurity.com
, Yahoo Business email subscribers were left unprotected when they logged in. Normally the login page would post names and passwords using SSL to protect the transmission. In this case, the login page posted that information over HTTP rather than HTTPS
. My guess is that this is not the first time a site has made this mistake. It's hard for QA engineers to catch silent errors like that unless they explicitly look for them.
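Looking for them explicitly is mechanical, though. Here's a sketch of a QA check (the page URL and form markup are hypothetical) that flags any form whose action resolves to plain HTTP:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class InsecureFormFinder(HTMLParser):
    """Collects <form> tags whose action resolves to a non-HTTPS URL."""
    def __init__(self, page_url: str):
        super().__init__()
        self.page_url = page_url
        self.insecure_actions = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        # Resolve relative actions against the page's own URL.
        action = dict(attrs).get("action", "")
        target = urljoin(self.page_url, action)
        if urlparse(target).scheme != "https":
            self.insecure_actions.append(target)

finder = InsecureFormFinder("https://mail.example.com/login")
finder.feed('<form action="http://mail.example.com/submit">'
            '<input name="password"></form>')
print(finder.insecure_actions)  # -> ['http://mail.example.com/submit']
```

A crawler that runs this over every page of a site would have caught the Yahoo mistake the day it shipped.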
Many web sites choose to leave the login page unprotected, and only protect the page that receives users' names and passwords. As I've said before, this approach works if your threat model is "people are trying to eavesdrop on my connection to the web site".
But that's not the only threat these days. Given the rise in phishing attacks, sites should use SSL on both the login page and the login submit page. In fact, businesses that use SSL for any purpose should strive to make SSL ubiquitous throughout their entire web site. The cost of making SSL universal on a site, in terms of hardware and support, is small compared to the damage that can be done when a programmer forgets to type "https://" and instead types "http://".
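One way to make a stray "http://" harmless is to refuse to serve anything over plain HTTP at all. As a sketch, using Python's WSGI interface (the app and hostname here are hypothetical), a site-wide wrapper can redirect every HTTP request to its HTTPS twin:

```python
def force_https(app):
    # WSGI middleware sketch: if a request arrives over plain HTTP,
    # answer with a redirect to the HTTPS equivalent instead of serving
    # it. A site-wide guard like this catches the stray "http://" link
    # a programmer types by mistake.
    def wrapper(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "")
            path = environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently",
                           [("Location", "https://" + host + path)])
            return [b""]
        return app(environ, start_response)
    return wrapper

def hello(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

secure_app = force_https(hello)
```

The same policy can be enforced in the front-end load balancer instead; the point is that it lives in one place, not in every programmer's memory.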
When I ask people why they don't use SSL on the login page, I get answers that can be boiled down to "SSL is too expensive". When I dig deeper, I find more complexity than I expected.
First, let me address the issue of raw SSL performance. There was a time when it was simply too expensive to contemplate SSL for very many tasks. SSL operations were performed on the same CPU that was used to perform database lookups, generate HTML, and a variety of other functions. The expensive RSA private key operations caused a significant drain on the servers.
But things are different now. Web farm architectures are far more sophisticated than they were in 1997. And between Moore's Law and performance tuning in the SSL libraries (like the open source NSS crypto libraries
) we now see very high performance numbers for SSL connections, even in software. We were recently doing some performance tests for a project, and while we had all the measurement tools fired up, we decided to see how many SSL-based logins we could get per second. On a $5,000 Dell box, we were able to get about 1,000 logins per second, which translates to 3,600,000 logins per hour. Depending on how we defined "login" (the ratio of RSA handshakes to SSL restarts), we could hit 5,000,000 logins per hour. Assuming you want a little breathing room for peak loads, you could throw another $5,000 box into the mix. Or toss in 10 and spread them around the country. Or you could buy SSL accelerators (either PCI cards, or as front-end balancers), though I doubt they are necessary these days given these numbers for software-based SSL.
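The arithmetic checks out:

```python
# Figures from our measurement run: one $5,000 box, SSL in software.
logins_per_second = 1_000
logins_per_hour = logins_per_second * 60 * 60
assert logins_per_hour == 3_600_000

# The higher 5,000,000/hour figure (counting cheap SSL session restarts
# as "logins") works out to roughly 1,388 per second sustained.
print(5_000_000 // 3600)  # -> 1388
```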
The other complaint which I used to hear in 1998, but have not heard in the past several years, is that modems cannot compress SSL sessions, which means that customers on modems will see worse performance on an HTTPS page than they will on an HTTP page. That fact remains (you can't dramatically compress properly encrypted data) but given the slide in modem usage, this is less and less of a problem over time. And now that AOL is raising rates on modem users
, that trend will accelerate.

Bottom line: In 2006, SSL is super cheap.
You have to plan for SSL performance, but it's not going to be a major task compared to the complexity of the rest of the web farm architectures that exist out there. And it's not going to cost much at all. For companies that are worth billions of dollars, I think they can easily find room for a few $5,000 boxes to improve security.
Knowing that, why do companies still not deploy SSL as widely as they should? That's the second, and more surprising part of the story. The reality is that these companies are segmented into divisions, each of which has its own goals, budgets, and problems. There's often a "login team" that is responsible for providing a single-sign-on scheme so that other divisions don't have to manage their own account/password databases and cookie-passing scheme. Then there's the team that deals with the front page. Then there's the team that deals with the primary service (banking, trading, mail, or whatever their core app is). Then there are groups that deal with the aftermath of fraud, like phishing.
These divisions are not connected as cleanly as they might be. There is little incentive for the login team, for example, to help the Front Page team use SSL correctly. They are there to provide SSL when you hit the "Login" submit button and that's what they do. Meanwhile the Front Page team believes that they cannot afford to put SSL on their portion of the site on their own. And the fraud team's main charter is to help customers who have been victimized, not to stop the problem in the first place.
Every survey since 1492 about computer security and the Internet has shown that CEOs, politicians, consumers, and dogs believe that "Security is the #1 issue facing us today". And yet, as far as I can see, there is a real failure of leadership at the senior-most levels of these companies to connect the dots and to put SSL on every page. This is especially true of the financial companies.
And the terrible part is that it's the consumers who pick up the tab for phishing attacks, not just in dollars, but in time and stress as well.
It's 2006 and there is every reason for a company in the financial world to implement a coherent site security strategy that involves SSL on every page
. Wells Fargo did it! SSL is cheap, security expertise is widely available, and customers are under attack on a daily basis.
In my last entry, I said that some of the biggest names in banking were guilty of teaching their customers that it was not necessary to check for the lock icon in the browser before they typed in their account names and passwords. Let's look at some specifics.
As of this writing, Washington Mutual, Bank of America, Bank of the West, and Chase Bank all accept name/password login from their home page, a page which is not protected by SSL. (I just picked those at random; the problem is more pervasive: see http://www.cs.biu.ac.il/~herzbea/Shame.html
for more information.) So does AOL. Yahoo always uses SSL, but only when you click on the submit button. MSN sends passwords in the clear unless you click on the link titled "Sign in using enhanced security". And when I did that, I got certificate errors because they botched the SSL configuration.
Beyond the bad security practices, Washington Mutual's help page says:
Non-secure Web pages. Clever thieves can build a fake Web site that looks nearly identical to an authentic one. They can even alter the URL (the Web address) that appears in your browser window. Watch out for non-secure Web pages that ask for sensitive information (secure sites will typically display a lock in the status bar at the bottom of your browser window).
And yet their site does exactly the reverse. It asks for customer names and passwords without a lock! This is a great example of a real web site acting like a phishing site.
Speaking of locks, many of these financial institutions work hard to break the user's mental model of security. They actually put an icon of a padlock in the HTML next to the login fields! But since anyone can put a lock icon into HTML, it adds nothing but the illusion of security.
I'm not the first to notice that banks have undone the user education that we promoted so heavily. From http://www.antiphishing.org/Phishing-dhs-report.pdf
Financial institutions have widely deviated from the guidelines they have disseminated for distinguishing phishing messages from legitimate communications, undermining the educational messages they have distributed. In particular, many financial institutions use unexpected domain names similar to the names a phisher would use, do not use SSL in a user-verifiable way on a login page, include clickable links in email communications, and so on.
Not all of the major sites do it wrong. Wells Fargo does it better than any other site I could find. Their entire site is protected by SSL. That's exactly the right approach to reduce the chances that people are fooled by phishing scams. There are no edge cases. Even their pages which offer generic information, like the ATM locator pages, are protected with SSL. The rule is simple: if you don't see the lock icon on every page, you are not on the Wells Fargo web site. Bravo!
Other sites that include SSL on the login page include the Gap and Ebay. So there are sites that understand the issue, but I'm starting to conclude that they are in the minority.
Next time: Poor excuses for not using SSL.
I've been poking around various web sites and have noticed that major on-line services often fail to use SSL to protect their login pages. I'm not talking about sites where they don't need, or don't care enough to use SSL. I'm referring to major sites that use SSL, but only after you hit the Submit button. From a password security standpoint, this is sufficient. You don't actually need to protect the page that says "Type in your user name and password". It does not matter if hackers see that
page, as long as they cannot see your name and password when you submit it. At least, it didn't use to matter.
The problem today is that phishing
has added some complexity to the security analysis of this situation. When phishers send out emails, they tell the victims that there's been some sort of problem, and they need to login to the web site to clear things up. They provide a link in the email which would normally take the victims to the real web site, but in the case of these attacks, they take the victim to a web site that is under the control of the phishers. These phishing emails and web sites are getting very good. It may not be possible to tell just by looking at the web page itself if it's your bank's web site, or the phisher's.
Why does this attack work?
In the early days of Netscape, we spent a lot of time and energy to train the press and customers that they should never type personal information into the browser unless they saw the lock icon. Long before the word "phishing" was coined, we knew bad people might do something which would be thwarted if users followed this advice. And it worked for quite some time. In fact, we had usability testing which showed that this training worked.
Sadly, things have changed. Many major companies have implemented their web sites so as to undo this user training. Users now have no way of knowing what's real and what's not real because their banks, travel agencies, and merchants have taught them through repeated experience that it's OK to type their account information and passwords into pages where there is no lock icon. And because in most cases, nothing bad happens, people got used to it.
It's really disturbing that so many of the biggest sites with the biggest brand names fail to put SSL on their login pages. In many ways, they have unwittingly created the environment that allows phishers to thrive.
In my next few posts, I'll explore the banking situation, some myths around SSL, and some best practices I'd like to see all financial institutions follow.