Fuzzing Coverage

February 2nd, 2010

Check out the latest update on fuzzing coverage: http://www.codenomicon.com/products/coverage.shtml

The most important metric for comparing fuzzing approaches is the number of flaws the method finds. A simple approach, such as a random fuzzer, requires virtually no investment, but it can only find around 10% of the vulnerabilities hiding in the software. A mutation-based fuzzer can find around 50% of the flaws. Once again the tool investment is minimal, but the cost of using the tools and integrating them into the development process is considerably greater. A fully model-based fuzzer can find as much as 80-90% of the flaws, but it can be the most expensive method to build and maintain.

The choice of tool is often based on integration capabilities and challenges, not coverage. If the protocol is standards-based, a model-based fuzzer is often the right choice. But especially with emerging technologies and agile development processes, the specifications needed to create model-based tests are not always available: there may be no consensus on the protocol specification, the specification may change rapidly, or in some special cases the specifications are proprietary and not available to testers. In such cases traffic captures can be used to create mutation-based fuzzers. When the number of interfaces is vast, random fuzzing might be a budgetary choice to get at least some fuzz test coverage before more advanced capabilities are introduced to the development process.
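The difference between the first two approaches can be sketched in a few lines. This is a toy illustration (the HTTP sample and byte-flipping strategy are my own, not taken from any particular tool): a random fuzzer emits pure noise, while a mutation-based fuzzer starts from valid traffic and corrupts parts of it, so its output is far more likely to get past the first parsing layers.

```python
import random

def random_fuzz(length=32):
    """Random fuzzing: pure noise, no knowledge of the input format."""
    return bytes(random.randrange(256) for _ in range(length))

def mutation_fuzz(sample: bytes, mutations=4):
    """Mutation-based fuzzing: corrupt a few bytes of a valid capture.

    A flipped byte may coincide with the original value; for a sketch
    like this, that is acceptable noise.
    """
    data = bytearray(sample)
    for _ in range(mutations):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

sample = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(random_fuzz())          # almost never resembles a valid HTTP request
print(mutation_fuzz(sample))  # mostly valid, so it exercises deeper parsing code
```

A model-based fuzzer goes further still: it builds the messages from a protocol model, so it can reach states that neither noise nor a mutated capture would ever trigger.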

Microsoft SDL for Agile, and then some fuzzing

November 11th, 2009

From Visual Studio Magazine:

2009/11/10/qa-bryan-sullivan-sdl

Microsoft Security Program Manager in the SDL team, Bryan Sullivan: “It is important to fuzz your parsing code periodically, but you are probably not going to find so many potential vulnerabilities doing so that you need to fuzz it every sprint.”

Interestingly, with some types of fuzzing you do not need to be that picky. In many environments I have visited, fuzzing is automated into the build process: when the code builds, the fuzz process starts automatically in the background. And why not? You do not need anyone to monitor a fuzz process, especially if you do not expect to find anything. Fuzzing actually fits very nicely into any programmer's automated unit test process, especially if you are coding protocol stacks or simple applications on top of industry-standard protocols such as HTTP, SOAP and SIP.
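Wiring a fuzz pass into an ordinary unit-test run might look like the following sketch. Everything here is hypothetical (the toy parser stands in for real protocol-stack code under test); the point is that a seeded fuzz loop runs unattended on every build, and only an unexpected crash fails the test:

```python
import random

def parse_header_line(line: bytes):
    """Toy header parser standing in for real protocol-stack code under test."""
    name, sep, value = line.partition(b":")
    if not name or not sep:
        raise ValueError("malformed header")
    return name.strip(), value.strip()

def test_parser_survives_fuzzing():
    """Fuzz case that runs as part of the normal unit-test suite."""
    rng = random.Random(1234)  # fixed seed keeps the build reproducible
    for _ in range(1000):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))
        try:
            parse_header_line(data)
        except ValueError:
            pass  # rejecting bad input is fine; any other exception fails the build
```

With a test runner such as pytest picking this up automatically, the fuzzing happens on every build with no one watching, exactly as described above.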

Academic Use of the Fuzz-Book?

July 20th, 2009

I know it is being used in academia… but it would be great to hear what you think of it. Please submit anonymous comments here, or just email me if you have any feedback. Or just let us know that you have read the book!

First Review Added

April 24th, 2009

I know there have been several others, but for some reason I have forgotten to add them here. Please let us know if you have written or seen one somewhere.

One Of The Major Challenges In Writing A Fuzzing Book

March 17th, 2009

When you work in the fuzzing domain, it is sometimes extremely challenging to get people to talk about it. Most people feel that using fuzzing gives such a competitive edge that revealing their use of fuzzing tools would hurt them. Things really changed during the last year. We now have people publicly reporting huge ROI from fuzzing. We have leading Fortune 500 companies dictating the use of fuzzing in the RFPs of their procurement processes. We also see marketing campaigns from companies like Google publicly advertising how proud they are to be doing fuzzing.

The BSIMM study by Cigital marks a new milestone among market studies of fuzzing. It does not even attempt to describe how common the use of fuzzing is, the sample of companies does not really indicate anything about the rest of the users of fuzzers, and the interview process itself may not have given much emphasis to fuzzing, as the authors all come from a static analysis mindset. But surprisingly enough, all of the top product security teams were found to be doing fuzzing already!

Another major milestone in studying the use of fuzzing is the inclusion of fuzzing-related questions in the Forrester questionnaire, completed by thousands of CIO/CSO/CISO people annually.

I personally look forward to hearing what Cigital and Forrester have to say about the use of fuzzing. If you are interested, please give us a shout here: Fuzzing 101

Reminder: Absolutely Finally The Last Chance To Win

January 21st, 2009

… until we get a new sponsor for some more books to give away! Tell us why you should have one of the books, and, surprise surprise, you might get one!

http://www.codenomicon.com/fuzzing-book/

Fuzzing Is A Surprise To Some, But Not To Us - Right?

January 7th, 2009

Check out this article.

The authors (Gary McGraw, Brian Chess, and Sammy Migues) interviewed leading product security teams in the industry, and collected the findings. The most important discovery (or maybe the biggest surprise to the authors?) was:

0. Fuzz testing is widespread.
“What kind of ‘last bullet’ is that on a top ten list?! Let us explain. Way back in 1997 in the book Software Fault Injection, Jeff Voas and McGraw wrote about many kinds of testing that can be imposed on software. We wondered whether security was a special case for software testing. One classic way to probe software ‘reliability’ is to send noise to a program and see what happens, i.e., fuzzing. Somehow the security community has morphed this technique into a widely applied way to look for software security problems. Wow. Who would have guessed that reliability trumps security?”

The importance of finding real and critical issues in software has finally been recognized as the highest priority by all leading security organizations! But we knew that already, because we have been helping them in the process. ;)

Interesting - Kind of Related to Fuzzing

November 13th, 2008

I have been reading a number of QA papers and books recently to catch up after a busy period. If you have time, look up some of these QA topics through your favorite search engine:

  • Test generation
  • Random testing, Adaptive random testing
  • Hypercuboids
  • Statecharts
  • Model based testing
  • Modified Condition/Decision Coverage (MC/DC)

For example, Jayaram and Mathur from Purdue present interesting measurements of using statecharts as the basis for generating message sequences for complex protocols such as TLS. It sounds pretty similar to fuzzing, at least to me, although at this phase the research is nowhere near the same domain. Today most block-based fuzzers (although some of them call themselves model-based) have extremely limited message sequence coverage; the worst of them only take a capture of traffic and then mutate it. The drawback is that you only do message structure fuzzing, the most basic form of fuzzing.
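To illustrate the sequence-level idea, here is a minimal sketch (my own toy model, not taken from the Purdue work) of enumerating message sequences from a statechart-like transition table, loosely shaped like a handshake. A capture-replaying fuzzer would only ever exercise one of these paths; a sequence-aware fuzzer can walk them all:

```python
# A toy transition table: state -> [(message, next_state), ...].
# The state and message names are invented for illustration.
TRANSITIONS = {
    "start":      [("ClientHello", "hello_sent")],
    "hello_sent": [("ServerHello", "negotiated"), ("Alert", "closed")],
    "negotiated": [("Finished", "established"), ("Alert", "closed")],
}

def sequences(state="start", path=()):
    """Enumerate every message sequence the model allows."""
    edges = TRANSITIONS.get(state, [])
    if not edges:
        yield path  # terminal state: emit the completed sequence
    for msg, nxt in edges:
        yield from sequences(nxt, path + (msg,))

for seq in sequences():
    print(" -> ".join(seq))
```

Each generated sequence is then a skeleton into which message-level anomalies can be injected, which is exactly the coverage that pure capture mutation misses.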

Then if you look at the work of e.g. Gotlieb and Petit from INRIA, you can get a glimpse of what QA people are looking at in the area of test generation. Any individual field in a protocol message can (potentially) automatically generate its own set of test data based on very basic assumptions, which can then be optimized into intelligent permutations of multi-anomaly fuzzing. Long gone are the static libraries of anomalies (again, very few real fuzzers use them today). The result is fewer test cases and better test coverage.
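The field-level idea can be sketched as follows. Everything here is invented for illustration (the field names, types, baseline values and anomaly sets are not from the INRIA work): each field derives anomalies from its type, and test cases substitute one or two anomalies at a time into an otherwise valid message:

```python
from itertools import combinations, product

# A toy two-field message: valid baseline values and the type of each field.
BASELINE = {"version": 1, "method": "GET"}
FIELD_TYPES = {"version": "uint8", "method": "string"}

def anomalies_for(ftype):
    """Each field type generates its own anomaly set from basic assumptions."""
    if ftype == "uint8":
        return [0, 128, 255]                 # boundary values
    if ftype == "string":
        return ["", "A" * 1024, "%s%n"]      # length and format-string anomalies
    return []

def generate_cases(max_anomalies=2):
    """Yield messages with 1..max_anomalies fields replaced by anomalies."""
    names = list(BASELINE)
    for n in range(1, max_anomalies + 1):
        for chosen in combinations(names, n):
            pools = [anomalies_for(FIELD_TYPES[f]) for f in chosen]
            for values in product(*pools):
                case = dict(BASELINE)
                case.update(zip(chosen, values))
                yield case

print(sum(1 for _ in generate_cases()))  # 3 + 3 single-anomaly + 9 double-anomaly = 15
```

Because the anomalies are derived per field rather than pulled from one big static library, the permutation space stays small and targeted, which is how you end up with fewer test cases and better coverage.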

It is interesting to see where fuzzing will go in the future, and how companies with QA background, and companies with security background will either end up in the same direction, or very different direction.

Winners Have Been Notified

October 6th, 2008

Eight lucky winners have been notified. The publisher should send more copies shortly (we have only received six so far), and then the fuzzing process will continue… Until then, we are still accepting new participants to the draw!

The best “Why Me” comments are also under selection process. Here is a sample of some of them (from current winners, who unfortunately will not get a chance to get a second copy):

  • “I’ve got to have it! They’re all out to get me!” by Steve Abler
  • “I am passionate about application security and the need for robust testing methods. I am an application security evangelist who proactively educates developers, development managers, security practitioners and executive management. I am currently lobbying for a corporate team to be tasked with supporting SSDLC using whitebox and blackbox tools. In short, I am someone who will both benefit from and provide value with the knowledge I can gain from this book.” by Jaime Castells
  • “Because it is the first resource I’ve seen that connects the dots between software QA and IT security - two topics that have fascinated, frustrated, and perplexed me for many years.” by Alex Chapman
  • “Keep your friends close and your enemies closer. Having this book will help me to keep hackers close but not that close.” by Richard N Price
  • “I need to understand the threats facing our applications better. We want to pull together a lab where we don’t just interrogate software (checking what APIs are called and if the app has the authorization) we want to black box test the app. The book would help us realize that goal.” by Loraine Beyer
  • “To restore my faith in Lady Luck.” by Laszlo Bortel
  • “Application testing for security flaws has become the next major defense against blended threats and this book shows you how to start and improve your fuzzing skills.” by Russell Weatherly
  • “SW Quality is a fuzzy subject, SW Security Quality doubly so! As a quality expert I see security testing important, but find that engineering the SW security quality intentionally in place in the development process is even more critical. I (and my team) needs to learn this.” by Erkki Pöyhönen

Congratulations to all winners!

Book Draw Results Oct 05

October 2nd, 2008

Last chance to participate in the book draw… I will (try to) email everyone with the results, whether you won or not. So no worries if you have not heard from me yet!

Update: My ITworld blog