
Maintaining security levels with robustness testing

InfoSecurity Europe : 17 April, 2009  (Special Report)
Ari Takanen of Codenomicon explains why compliance doesn't equal security and why robustness testing is a key aspect of maintaining a secure IT environment

Security standards require you to build best practices in IT security. Unfortunately, the practices defined in the standards lag somewhat behind the best practices available in the industry. New security vulnerabilities emerge constantly, so the protection measures and vulnerability detection techniques in use must continually be adjusted, in line with and beyond what the standards define.

Reactive and proactive security

One example of recent shifts in security thinking is the movement from reactive security tools, such as firewalls, IDS systems and security scanners, to proactive tools that find and protect against zero-day threats, that is, vulnerabilities of which the manufacturers, developers and vendors are unaware. There are no protection measures available for zero-day weaknesses. The only way to protect against them is to find the weaknesses before someone else does. And that is how you become proactive.

Finding security vulnerabilities

There are two ways of finding new, previously unknown vulnerabilities in software. The first is code auditing, but unfortunately it requires access to source code and is encumbered with a high rate of false positives: reports of weaknesses that have no bearing on the security of the product. The easier method is robustness testing, or fuzzing. This is a test automation technique that generates abnormal inputs to the software under test in order to stimulate crash-level failures. Fuzzing has no false positives: a crash is a crash, and always serious.
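The idea can be sketched in a few lines. The following is a toy illustration, not any particular fuzzing product: `parse_message` is a hypothetical, deliberately fragile stand-in for the software under test, and the harness simply feeds it random inputs and records every input that triggers a crash-level failure.

```python
import random

def parse_message(data: bytes) -> None:
    """Hypothetical stand-in for the software under test; a real
    harness would drive the actual parser or network service."""
    if len(data) >= 2:
        declared_len = data[0]   # first byte claims the payload length
        payload = data[1:]
        if declared_len > 0:
            # Flaw: trusts the declared length; IndexError when it
            # exceeds the real payload size.
            _ = payload[declared_len - 1]

def fuzz(iterations: int = 1000, seed: int = 0) -> list:
    """Throw random inputs at the target and collect the crashers."""
    rng = random.Random(seed)
    crashing_inputs = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            parse_message(data)
        except Exception:
            # No false positives here: every exception is a real finding.
            crashing_inputs.append(data)
    return crashing_inputs

crashes = fuzz()
print(f"{len(crashes)} crashing inputs found")
```

Each saved input is a reproducible test case that a developer can replay against the target to debug the underlying flaw.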

Two methods of fuzzing

But be warned: not all fuzzing techniques are effective. The original meaning of fuzzing was sending random or semi-random inputs to the piece of software you want to break. Random fuzzing can break things, and is sometimes, surprisingly, much more effective than many of the recent academic developments in intelligent fuzzing. But the most effective fuzzing comes from model-based testing techniques, in which the test tools are taught the operation of the communication interfaces: the protocols, including the syntax of the messages and the state machines followed in the message exchange. A rule of thumb is that random fuzzing finds around 20-30% of the flaws hiding in a product, whereas model-based fuzzing finds more than 80% of them.
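The gap between the two approaches is easy to demonstrate. In this toy sketch (all names hypothetical), `parse_request` simulates a target whose flaw sits behind grammar checks: purely random bytes are rejected before reaching the vulnerable code, whereas a model-based generator that knows the message syntax delivers each anomaly inside an otherwise well-formed request.

```python
import random

def parse_request(data: bytes) -> None:
    """Toy target: only inputs matching 'GET <path> <version>\r\n\r\n'
    reach the simulated flaw in path handling."""
    if not data.endswith(b"\r\n\r\n"):
        return
    parts = data[:-4].split(b" ")
    if len(parts) != 3 or parts[0] != b"GET":
        return
    path = parts[1]
    buf = bytearray(1024)
    # Simulated flaw: fixed-size buffer; ValueError on oversized paths.
    memoryview(buf)[: len(path)] = path

def random_case(rng: random.Random) -> bytes:
    # Blind random bytes almost never survive the grammar checks above.
    return bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))

def model_based_case(rng: random.Random) -> bytes:
    # The model knows the syntax, so each field anomaly is delivered
    # inside an otherwise valid message.
    method = rng.choice([b"GET", b"G" * 1000, b""])
    path = rng.choice([b"/", b"/" + b"A" * 65536, b"/%00"])
    version = rng.choice([b"HTTP/1.0", b"HTTP/9.9", b""])
    return b" ".join([method, path, version]) + b"\r\n\r\n"

def count_crashes(gen, trials: int = 300, seed: int = 1) -> int:
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        try:
            parse_request(gen(rng))
        except Exception:
            crashes += 1
    return crashes

print("random:", count_crashes(random_case))
print("model-based:", count_crashes(model_based_case))
```

Running it shows the model-based generator triggering the flaw while random data finds essentially nothing, which is the intuition behind the 20-30% versus 80% rule of thumb above.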

The tools are out there to grab

Before 1999, fuzzing was an academic technique used only by a few researchers and a handful of hackers. But with freely available fuzz test suites such as PROTOS (University of Oulu, 1999-2001), it quickly became a tool for all software developers. Although it was first used by people involved in the development of operating systems and network protocols, web developers quickly adopted the technique as well. In 2001, the first commercial tools emerged from companies such as Codenomicon and Cenzic. The tools are plentiful today. But note that whereas some tools are point-and-shoot, requiring no knowledge of the protocols and no specialist security expertise, many others are just fuzzer development frameworks, which require you to build and maintain the actual tools yourself. Don't be put off by a difficult first experience; just look elsewhere for easier-to-use tools.

Who is using fuzzing?

Still, not all developers are aware of fuzzing tools, or they have chosen not to use them due to time or budget restrictions. That is why the second wave of fuzzing users came from the system integration and service provider domain. Anyone who builds critical systems or networks is naturally interested in how reliable the devices they use are. Some even started using fuzzing as a procurement criterion, requiring their vendors to fuzz-test before they would even consider evaluating their products. With that, fuzzing turned from an R&D tool into a security assessment tool. The majority of fuzzer users today come from the enterprise environment: security-aware IT staff and newly established product security and risk assessment teams. Almost all self-respecting penetration testing service providers use at least some type of fuzzing in their assessments.

Fuzzing compliance

Still today, fuzzing is integrated into very few compliance assessments. Only a handful of product certification processes include even a mention of fuzzing or any similar testing technique. Fuzzing is not part of all penetration tests, and it is not used in all system integration tests. The compliance requirements that do mandate fuzzing are almost all proprietary requirements specifications from a range of Fortune 1000 enterprises and telecommunication service providers. And without such requirements, we cannot expect every network to be secure. Zero-day threats will keep emerging, and your people will be kept busy with the patch-and-penetrate race: patch before it is too late. For most of you, security is still a reactive process. But I urge you not to close your eyes to the future. Security should be proactive, not reactive. The time of reactive security has come to an end.

Codenomicon is exhibiting at Infosecurity Europe 2009, the No. 1 industry event in Europe, held on 28th - 30th April at its new venue, Earl's Court, London. The event provides an unrivalled free education programme, with exhibitors showcasing new and emerging technologies and offering practical and professional expertise.

Ari Takanen will be speaking at Infosec London this April on fuzzing and how it is used in the enterprise space. The talk is built around his recent book on fuzzing, published last year by Artech House.

Ari Takanen, founder and CTO of Codenomicon, has been researching information security issues in critical environments since 1998. His work at Codenomicon aims to ensure that new technologies are accepted by the general public by providing means of measuring and ensuring quality in networked software. He is one of the people behind the PROTOS research, which studied information security and reliability errors in numerous protocol implementations. His company, Codenomicon Ltd, provides automated tools with a systematic approach to testing a multitude of interfaces on mission-critical software. He is the author of two books, on VoIP security and on security testing.