Monday 28 April 2014

PHISHING ATTACKS ON TELECOMMUNICATION CUSTOMERS RESULTING IN ACCOUNT TAKEOVERS CONTINUE

Phishing attacks targeting various telecommunication companies’ customers continue. Individuals receive automated telephone calls that claim to be from the victim’s telecommunication carrier. The IC3 released an advisory about this scam in May 2013. Since then, the attacks have increased and recently, victims have reported receiving SMS texts with a similar phishing message encouraging them to go to web sites to claim their reward. Victims are directed to a phishing site to receive a credit, discount or prize ranging from $100 to $2,500. The monetary amounts being offered are increasing to make the scam more enticing. A fraudulent web site example would be www.My(insertphone company name)900.com. Other fraudulent web sites may contain words such as, MyBonus, ILove, ILike, Reward, Promo, or similar words, along with a telephone company’s name.
The phishing site is a replica of one of the telecommunication carrier’s sites and requests the victim’s log-in credentials and the last four digits of their Social Security number. Once access is gained, the subject makes changes to the customer’s account and may place orders for mobile phones.
The IC3 urges the public to be cautious of unsolicited telephone calls, e-mails and text messages, especially those promising some type of compensation for supplying account information. If you receive such an offer, verify it with the business associated with your account before supplying any information. Use the phone numbers that appear on your account statement to contact the business.
If you have fallen victim to this scam, immediately notify your telecommunication carrier and file a complaint with the IC3, http://www.ic3.gov.

How to Read a Cookie

Cookies provide a means in Web applications to store user-specific information, such as history or user preferences. A cookie is a small bit of text that accompanies requests and responses as they go between the Web server and client. The cookie contains information that the Web application can read whenever the user visits the site.

The browser is responsible for managing cookies on a user system. Cookies are sent to the server with a page request and are accessible as part of the HttpRequest object, which exposes a Cookies collection. You can read only cookies that have been created by pages in the current domain or path.

Procedure
To read a cookie

Read a string from the Cookies collection using the cookie's name as the key.

The following example reads a cookie named UserSettings and then reads the value of the subkey named Font.
Visual Basic

If (Request.Cookies("UserSettings") IsNot Nothing) Then
    Dim userSettings As String
    If (Request.Cookies("UserSettings")("Font") IsNot Nothing) Then
        userSettings = Request.Cookies("UserSettings")("Font")
    End If
End If

C#

if (Request.Cookies["UserSettings"] != null)
{
    string userSettings;
    if (Request.Cookies["UserSettings"]["Font"] != null)
    {
        userSettings = Request.Cookies["UserSettings"]["Font"];
    }
}

Compiling the Code

This example requires:

An ASP.NET Web page.
A cookie named UserSettings, written previously.

Friday 25 April 2014

Heartbleed: Pointer-arithmetic considered harmful


Heartbleed has encouraged people to look at the OpenSSL source code. Many have called it "spaghetti code" -- tangled, fragile, and hard to maintain. While this characterization is accurate, it's unfair. OpenSSL is written according to standard programming practices. It's those practices which are at fault. If you get new engineers to rewrite the code, they'll follow the same practices, and end up with equally tangled code.

C coding practices are out of date, laughably so. If you learn how to program in C at a university today, your textbook and your professor will teach you to write code as if it were 1984 and not 2014. They will teach you to use strcpy(), a function prone to buffer-overflows that is widely banned in modern projects. There are fifty other issues with C that are just as important.

In this post, I'm going to focus on one of those out-of-date practices: "pointer-arithmetic". It's a feature essentially unique to C (and the languages, like C++, that inherit its pointers) -- few other languages allow it, for good reason. Pointer-arithmetic leads to unstable, hard-to-maintain code.

In normal languages, if you want to enumerate all the elements in an array, you'd do so with an expression like the following:

     p[i++]

The above code works in a wide variety of programming languages. It works in C, too, and indeed, most languages got it by copying C syntax. However, in C, you may optionally use a different expression:

    *p++

This is pointer-arithmetic. Instead of a fixed pointer and a variable index, the pointer is variable, moving through the array.
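A minimal sketch (function names are mine) putting the two styles side by side, summing an array each way:

```c
#include <assert.h>

/* Integer-index style: the pointer stays fixed, the index moves. */
static int sum_indexed(const int *p, int n)
{
    int total = 0;
    int i = 0;
    while (i < n)
        total += p[i++];
    return total;
}

/* Pointer-arithmetic style: the pointer itself moves through the array. */
static int sum_pointer(const int *p, int n)
{
    const int *end = p + n;
    int total = 0;
    while (p < end)
        total += *p++;
    return total;
}
```

Both compute the same result; the only difference is which variable moves.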

To demonstrate how this gets you into trouble, I present the following bit of code from "openssl/ssl/s3_srvr.c":

    {
        s2n(strlen(s->ctx->psk_identity_hint), p);
        strncpy((char *)p, s->ctx->psk_identity_hint,
                strlen(s->ctx->psk_identity_hint));
        p += strlen(s->ctx->psk_identity_hint);
    }

The first thing to notice is the strncpy() line. It contains the old programming joke:

    strncpy(dst,src,strlen(src));

The purpose of strncpy() is to guard against buffer-overflows by double-checking the size of the destination. The joke version double-checks the size of the source -- defeating the purpose, causing the same buffer-overflow as if the programmer had just used the original strcpy() in the first place.

This is a funny bit of code, but it turns out it's not stupid. In C, text strings are nul terminated, meaning that a byte with the value 0 is appended to the end of every string. The intent of the code above is to prevent the nul termination from being copied, not to prevent buffer-overflows. In other words, the true intent of the programmer can be expressed by changing the above function from strncpy() to memcpy().
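A small sketch of that equivalence (helper names are mine): with the count taken from the source, strncpy() writes exactly strlen(src) bytes and no terminator, which memcpy() states directly:

```c
#include <assert.h>
#include <string.h>

/* The "joke" version: the bound is the SOURCE length, so exactly
 * strlen(src) bytes are copied and no nul terminator is written. */
static void copy_no_nul_strncpy(char *dst, const char *src)
{
    strncpy(dst, src, strlen(src));
}

/* The honest version: memcpy() says the same thing directly. */
static void copy_no_nul_memcpy(char *dst, const char *src)
{
    memcpy(dst, src, strlen(src));
}
```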

The reason the programmer wants to avoid nul termination is that they are building a protocol buffer in which the string will be prefixed by a length. That's the effect of the macro s2n() in the first line of code, which inserts a 2-byte length field and invisibly moves the pointer 'p' forward two bytes. (By the way, macros that invisibly alter variables are likewise bad programming practice.)
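For reference, s2n() in OpenSSL is defined roughly as follows (paraphrased from ssl_locl.h); note the trailing c += 2 that silently advances the pointer:

```c
#include <assert.h>

/* Write a 16-bit length in network byte order, then advance the
 * output pointer -- the side effect on 'c' is easy to miss. */
#define s2n(s, c) ((c[0] = (unsigned char)(((s) >> 8) & 0xff), \
                    c[1] = (unsigned char)((s) & 0xff)), \
                   c += 2)
```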

The correct fix for the above code is to change from a pointer-arithmetic paradigm to an integer-indexed paradigm. The code would look like the following:

append_short(p, &offset, max, strlen(s->ctx->psk_identity_hint));
append_string(p, &offset, max, s->ctx->psk_identity_hint);

The value 'p' remains fixed, we increment the "offset" as we append fields, and we track the maximum size of the buffer with the variable "max". This both untangles the code and also makes it inherently safe, preventing buffer-overflows.
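append_short() and append_string() are hypothetical helpers, not real OpenSSL functions. A plausible sketch, with every write bounds-checked against 'max', might look like this:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical: append a 2-byte big-endian length, failing (rather
 * than overflowing) if fewer than 2 bytes remain in the buffer. */
static int append_short(unsigned char *p, size_t *offset, size_t max,
                        size_t value)
{
    if (max - *offset < 2 || value > 0xffff)
        return 0;
    p[(*offset)++] = (unsigned char)(value >> 8);
    p[(*offset)++] = (unsigned char)(value & 0xff);
    return 1;
}

/* Hypothetical: append a string's bytes without the nul terminator,
 * again refusing to write past 'max'. */
static int append_string(unsigned char *p, size_t *offset, size_t max,
                         const char *s)
{
    size_t len = strlen(s);
    if (max - *offset < len)
        return 0;
    memcpy(p + *offset, s, len);
    *offset += len;
    return 1;
}
```

The return value turns a silent buffer-overflow into an explicit, checkable failure.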
Last year, college professor John Regehr held a little contest to write a simple function to parse integers. Most solutions to the contest used the pointer-arithmetic approach; only a few (like my solution) used the integer-index paradigm. I urge you to click on those links and compare the other solutions to mine.

My solution, using integer indexes

Typical other solution, using pointer-arithmetic


Many justify pointer-arithmetic claiming it's faster. This isn't really true. In the above contest, my solution was one of the fastest solutions. Indeed, I'm famous for the fact that my code is usually an order of magnitude faster than other people's code. Sure, you can show with some micro-benchmarks that pointer-arithmetic is faster in some cases, but that difference rarely matters. The simplest rule is to never use it -- and if you ever do, write a big comment block explaining why you are doing something so ugly, and include the benchmarks proving it's faster.

Others justify pointer-arithmetic out of language bigotry. We are taught to look down at people who try to program in one language as if it were another language. If you program in C the way you'd program in Java, then (according to this theory) you should just stick with Java. That my snippet of code above works equally well in Java and C is considered a bad thing.

This bigotry is wrong. Yes, when a language gives you desirable constructs, you should use them. But pointer-arithmetic isn't desirable. We use C not because it's a good language, but because it's low-level and the lingua franca of libraries. We can write a library in C for use with Java, but not the reverse. We use C because we have to. We shouldn't be programming in the C paradigm -- we should be adopting the paradigms of other languages. For example, C should be "object oriented", where complex structures have clear constructors, destructors, and accessor member functions. C is hostile to that paradigm of programming -- but it's still the right way to program.


Pointer-arithmetic is just one of many issues affecting the OpenSSL source-base. I point it out here because of the lulz of the strncpy() function. Perhaps in later posts I'll describe some of its other flaws.



Update: Another good style is "functional" programming, where functions avoid "side effects". Again, C is hostile to the idea, but when coders can avoid side-effects, they should.

Chrome Beta for Android Update

Chrome Beta for Android has been updated to 35.0.1916.69 and will be available in Google Play over the next few hours. This release contains stability and bug fixes. A partial list of changes in this build is available in the SVN revision log. If you find a new issue, please let us know by filing a bug. More information about Chrome for Android is available on the Chrome site.

Jason Kersey
Google Chrome

Speeding up and strengthening HTTPS connections for Chrome on Android

Speeding up and strengthening HTTPS connections for Chrome on Android:
Earlier this year, we deployed a new TLS cipher suite in Chrome that operates three times faster than AES-GCM on devices that don’t have AES hardware acceleration, including most Android phones, wearable devices such as Google Glass and older computers. This improves user experience, reducing latency and saving battery life by cutting down the amount of time spent encrypting and decrypting data. 

To make this happen, Adam Langley, Wan-Teh Chang, Ben Laurie and I began implementing new algorithms -- ChaCha20 for symmetric encryption and Poly1305 for authentication -- in OpenSSL and NSS in March 2013. It was a complex effort that required implementing a new abstraction layer in OpenSSL in order to support the Authenticated Encryption with Associated Data (AEAD) encryption mode properly. AEAD enables encryption and authentication to happen concurrently, making it easier to use and optimize than older, commonly-used modes such as CBC. Moreover, recent attacks against RC4 and CBC also prompted us to make this change. 

The benefits of this new cipher suite include:
  • Better security: ChaCha20 is immune to padding-oracle attacks, such as Lucky13, which affects CBC mode as used in TLS. By design, ChaCha20 is also immune to timing attacks. Check out a detailed description of TLS ciphersuite weaknesses in our earlier post.
  • Better performance: ChaCha20 and Poly1305 are very fast on mobile and wearable devices, as their designs are able to leverage common CPU instructions, including ARM vector instructions. Poly1305 also saves network bandwidth, since its output is only 16 bytes compared to HMAC-SHA1, which is 20 bytes. This represents a 16% reduction of the TLS network overhead incurred when using older ciphersuites such as RC4-SHA or AES-SHA. The expected acceleration compared to AES-GCM for various platforms is summarized in the chart below.
[Chart: "Encryption" -- expected speedup over AES-GCM, by platform]

As of February 2014, almost all HTTPS connections made from Chrome browsers on Android devices to Google properties have used this new cipher suite. We plan to make it available as part of the Android platform in a future release. If you’d like to verify which cipher suite Chrome is currently using, on an Android device or on desktop, just click on the padlock in the URL bar and look at the connection tab. If Chrome is using ChaCha20-Poly1305 you will see the following information:

[Screenshot: Chrome connection tab showing the ChaCha20-Poly1305 cipher suite]

ChaCha20 and Poly1305 were designed by Prof. Dan Bernstein from the University of Illinois at Chicago. The simple and efficient design of these algorithms combined with the extensive vetting they received from the scientific community make us confident that these algorithms will bring the security and speed needed to secure mobile communication. Moreover, selecting algorithms that are free for everyone to use is also in line with our commitment to openness and transparency.

We would like to thank the people who made this possible: Dan Bernstein who invented and implemented both ChaCha20 and Poly1305, Andrew Moon for his open-source implementation of Poly1305, Ted Kravitz for his open-source implementation of ChaCha20 and Peter Schwabe for his implementation work. We hope there will be even greater adoption of this cipher suite, and look forward to seeing other websites deprecate AES-SHA1 and RC4-SHA1 in favor of AES-GCM and ChaCha20-Poly1305, since they offer safer and faster alternatives. IETF draft standards for this cipher suite are available here and here.

Monday 21 April 2014

Wildcard DNS, Content Poisoning, XSS and Certificate Pinning

Hi everyone, this time I'm going to talk about an interesting vulnerability that I reported to Google and Facebook a couple of months ago. I had some spare time last October and started testing for vulnerabilities on a few companies with established bug bounty programs. Google awarded me $5,000 and Facebook paid me $500 for reporting the bugs.

I know you may be more interested in highly sophisticated exploits that allow arbitrary file upload to the Internet, with custom payloads that may lead to unexpected behavior like closing Security Lists. Hopefully this class of bugs is already patched by Fyodor and Attrition is offering an efficient exploit mitigation technique.

The title may be a little confusing, but I'm going to show that it's possible to combine all these techniques to exploit vulnerable systems.

Content Poisoning and Wildcard DNS

Host header poisoning occurs when the application doesn't validate full URLs generated from the HTTP Host header, including the domain name. Recently, the Django framework fixed a few vulnerabilities related to this, and James Kettle wrote an interesting post discussing lots of attack scenarios using Host header attacks.

While testing this issue, I found a different kind of Host header attack that abuses the possibility to browse wildcard domains. Let's have a quick look at the Wikipedia entry on Hostnames:
"The Internet standards (Request for Comments) for protocols mandate that component hostname labels may contain only the ASCII letters 'a' through 'z' (in a case-insensitive manner), the digits '0' through '9', and the hyphen ('-'). The original specification of hostnames in RFC 952, mandated that labels could not start with a digit or with a hyphen, and must not end with a hyphen. However, a subsequent specification (RFC 1123) permitted hostname labels to start with digits. No other symbols, punctuation characters, or white space are permitted."
The fun part here is that the network stack from Windows, Linux and Mac OS X consider domains like -www.plus.google.com, www-.plus.google.com and www.-.plus.google.com valid. It's interesting to note that Android won't resolve these domains for some reason.
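As a sketch of the label rule the RFCs describe (function name is mine): letters, digits and hyphens only, with no leading or trailing hyphen. By this check, labels like "-www" or "-" are plainly invalid, even though the resolvers above accept them:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* RFC 952/1123 hostname label check: 1-63 chars, ASCII letters,
 * digits and '-' only, and '-' may not begin or end the label. */
static int valid_label(const char *label)
{
    size_t len = strlen(label);
    if (len == 0 || len > 63)
        return 0;
    if (label[0] == '-' || label[len - 1] == '-')
        return 0;
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)label[i];
        if (!isalnum(c) && c != '-')
            return 0;
    }
    return 1;
}
```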



Take, for example, the following URL: https://www.example.com.-.www.sites.google.com. If we compose an e-mail and paste it into the body, GMail will split it, and the received message will have two "clickable" parts (https://www.example.com and sites.google.com).


Most e-mail notifications use the very same host you are browsing in order to compose their messages: you see where this is going, right?

Facebook has a wildcard DNS entry at zero.facebook.com. In order to exploit the flaw, we have to browse the service using a poisoned URL and perform actions that may need e-mail confirmation, checking whether Facebook mails the crafted URL to the user.


The only vulnerable endpoint that I found affected by this issue was the registration e-mail confirmation. You may be asking, how could one exploit this to attack a legitimate user?

Suppose I want to attack the Facebook account of goodguy@example.com. I can create or associate a "duplicate" account using the "+" sign by browsing Facebook with these injected URLs. If I navigate to Facebook using a URL like https://www.example.com.-.zero.facebook.com, all I have to do is create the duplicate account goodguy+DUPLICATE@example.com. Most e-mail services, like GMail and Hotmail, ignore everything after the "+" and deliver the message to the original account.

In this case, all e-mails that Facebook sent to confirm that association had the poisoned links.


This can also be used to poison password reset e-mails, but Facebook's forms were not affected. They quickly fixed the issue by hard-coding the proper URL in their e-mail confirmation system. It's also possible (but not recommended) to fix these issues by sending notifications with relative links instead of complete URLs ("please click here" instead of "please click on the specified URL: www.example.com.-.zero.facebook.com").

XSS and Wildcard DNS

While searching for these issues on Google I quickly found wildcard domains like:

https://w00t.drive.google.com
https://w00t.script.google.com
https://w00t.sites.google.com

In case you're wondering how to quickly find these wildcard domains, you can download and lookup for them on the scans.io datasets. You can find these references on the Reverse DNS records or by searching for SSL certificates issued to wildcard domains, like *.sites.google.com.

During my initial tests, I was unable to craft URLs using .-. inside the drive.google.com domain (I got 500 error messages), and all I could do was create URLs like this: https://www.example.com-----www.drive.google.com.

When you browse Google Drive using this URL, upload a File to a Folder and try to Zip/Download it asking for an e-mail confirmation (“Email when ready”), the e-mail confirmation message will be like this:


The "ready for downloading" link would point to https://www.example.com-----www.drive.google.com/export-result?archiveId=REDACTED. So far no big deal, I was still unable to poison the links... And phishing yourself is not that useful =)

I kept testing different URLs until I found a weird behavior on Google's DNS servers. When typing URLs containing a domain you control, followed by a certain number of "-" characters and the wildcard domain from Google, the resolved IP would be the one from the domain you control.

My highly sophisticated Fuzzer in action
For some reason, there was a glitch on their DNS servers, more specifically in the regexp that stripped "--" from the domain prefixes. I'm not sure why they performed these checks but that may have something to do with Internationalized Domain Names.

XKCD's take on the bug
Some Google domains affected by this issue (October 2013):

docs.google.com
docs.sandbox.google.com
drive.google.com
drive.sandbox.google.com
glass.ext.google.com
prom-qa.sandbox.google.com
prom-test.sandbox.google.com
sandbox.google.com
script.google.com
script.sandbox.google.com
sites.google.com
sites.sandbox.google.com

Now that I can impersonate a Google domain, it's possible to abuse the Same Origin Policy and issue requests on behalf of a logged-in user. lcamtuf already told us about HTTP cookies, or how not to design protocols. What happens if we control www.example.com and a logged-in user of drive.google.com visits the crafted URL http://www.example.com---.drive.google.com?

The request goes to the legitimate site:


The request goes to the user-controlled site, in this case my own server running nginx:


This leads to an XSS-like attack: you have bypassed the same-origin policy and can, for example, steal cookies and run scripts in the context of the site.
Certificate Pinning and Wildcard DNS

So far so good, but what if we were performing the same tests on Google Chrome, which enforces Certificate Pinning for their domains? I didn't notice at first, but I accidentally found an issue on Chrome too: it was failing to perform the proper HSTS checks for these non-RFC compliant domains.

Other parts of the network stack were processing and fetching results from these "invalid" DNS names, but TransportSecurityState was rejecting them, so HSTS policies didn't apply. The Chromium fix simply removed the sanity checks, making TransportSecurityState more promiscuous in what it processes.



You can easily reproduce this on Chrome prior to v31: proxy Chrome through OWASP ZAP (accepting its certificate) and visit a URL like https://sites.google.com; Chrome will display a "heightened security" error message. If you type URLs like https://www-.sites.google.com or https://www-.plus.google.com, Chrome offers the option to "Proceed anyway". If you're in Turkey right now, you don't need to do anything; Turkish Telecom does all the MITM work for you.



It's worth mentioning that when you issue a wildcard certificate for your host, it will be valid for a single level only. Certificates issued to *.google.com should not be trusted when used on domains like abc.def.google.com.
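That single-label rule can be sketched as a matcher (function name is mine) in which '*' stands in for exactly one leftmost label, so the part it matches must not contain a dot:

```c
#include <assert.h>
#include <string.h>

/* Match 'host' against a certificate name such as "*.google.com".
 * The wildcard covers one label only: the first dot in 'host' must
 * begin the suffix that the rest of the pattern spells out. */
static int wildcard_match(const char *pattern, const char *host)
{
    if (strncmp(pattern, "*.", 2) != 0)
        return strcmp(pattern, host) == 0; /* no wildcard: exact match */

    const char *dot = strchr(host, '.');
    if (dot == NULL)
        return 0; /* no label for the wildcard to consume */
    return strcmp(pattern + 1, dot) == 0; /* compare ".google.com" suffixes */
}
```

Under this rule, abc.def.google.com fails against *.google.com because the wildcard would have to swallow two labels.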

The hardcoded list of domains and pinned certificates from Chrome can be found here:

https://src.chromium.org/viewvc/chrome/trunk/src/net/http/transport_security_state_static.json

During my analysis, I found that 55 out of 397 domains with Transport Security enabled had wildcard entries on their DNS. A nation sponsored attacker, with a valid and trusted CA could simply MITM your traffic and inject requests to these invalid domains, circumventing the HSTS policies and stealing session cookies, for example.

Google did not assign a CVE for that bug, but they fixed that within a couple of weeks. Chrome 32 and 33+ (the one that changed the SSL warning from red to yellow) are not affected by this issue.

In times of Goto fails, it was really interesting to follow the Chromium's tracker, their internal discussions, tests performed and so on. The commits fixing these issues can be found here.

Conclusion

Google and Facebook security teams were both great to deal with. The bug was quite fun as well because it was different from the traditional OWASP Top 10 issues.

And because the industry totally needs new Vulnerability terminologies, anyone willing to refer to these attacks shall name them Advanced Persistent Cross Site Wildcard Domain Header Poisoning (or simply APCSWDHP).

In case you're from the NSA and want to use this technique to implant our DNSs, please use the codename CRAZY KOALA so we can better track them when the next Snowden leaks your documents.

Google Services Updated to Address OpenSSL CVE-2014-0160 (the Heartbleed bug)

You may have heard of "Heartbleed," a flaw in OpenSSL that could allow the theft of data normally protected by SSL/TLS encryption. We've assessed this vulnerability and applied patches to key Google services such as Search, Gmail, YouTube, Wallet, Play, Apps, App Engine, AdWords, DoubleClick, Maps, Maps Engine, Earth, Analytics and Tag Manager. Google Chrome and Chrome OS are not affected. We are still working to patch some other Google services. We regularly and proactively look for vulnerabilities like this -- and encourage others to report them -- so that we can fix software flaws before they are exploited.

If you are a Google Cloud Platform or Google Search Appliance customer, or don’t use the latest version of Android, here is what you need to know:

Cloud SQL
We are currently patching Cloud SQL, with the patch rolling out to all instances today and tomorrow. In the meantime, users should use the IP whitelisting function to ensure that only known hosts can access their instances. Please find instructions here.

Google Compute Engine
Customers need to manually update OpenSSL on each running instance or should replace any existing images with versions including an updated OpenSSL. Once updated, each instance should be rebooted to ensure all running processes are using the updated SSL library. Please find instructions here.

Google Search Appliance (GSA)
Engineers have patched GSA and issued notices to customers. More information is available in the Google Enterprise Support Portal.

Android
All versions of Android are immune to CVE-2014-0160 (with the limited exception of Android 4.1.1; patching information for Android 4.1.1 is being distributed to Android partners).

We will continue working closely with the security research and open source communities, as doing so is one of the best ways we know to keep our users safe.

Apr 12: Updated to add Google AdWords, DoubleClick, Maps, Maps Engine and Earth to the list of Google services that were patched early, but inadvertently left out at the time of original posting.

Apr 14: In light of new research on extracting keys using the Heartbleed bug, we are recommending that Google Compute Engine (GCE) customers create new keys for any affected SSL services. Google Search Appliance (GSA) customers should also consider creating new keys after patching their GSA. Engineers are working on a patch for the GSA, and the Google Enterprise Support Portal will be updated with the patch as soon as it is available.

Also updated to add Google Analytics and Tag Manager to the list of Google services that were patched early, but inadvertently left out at the time of original posting.

Apr 16: Updated to include information about GSA patch.