Enterprise Architect

Is Your Application Security Up to Spec?
Get the lowdown on an effective strategy for defending your applications.
by Alex Smolen

June 23, 2005

Legislation such as the Gramm-Leach-Bliley Act and the Sarbanes-Oxley Act has generated much interest and anxiety regarding corporate security. Essentially, this legislation mandates security by threatening steep fines and requiring information disclosure for any company that allows security breaches to impact private data. However, the recent legislation does not explain how to secure this data, and companies are struggling to solve the information security problem.

A typical first step is to examine the network infrastructure for security vulnerabilities. This is probably not the best approach, considering that the majority of exploits do not occur at the network level. In fact, according to Gartner, 75 percent of malicious hackers are walking through the front door: the insecure applications that companies use to access their data.

Unfortunately for organizations faced with mandates for security, application security is not only critical but also difficult to achieve. Application security has steadily ascended in priority as a requirement for enterprise software, growing from a customer trust issue to a serious matter of legal culpability. As CSOs and CIOs redouble their efforts to formulate a security solution that is both cost-effective and compatible with existing business models, the myriad of offerings, methodologies, and guidelines can seem overwhelming and untenable. It is exceedingly difficult to find a comprehensive, manageable solution to what is by nature an unclear problem.

From this standpoint, asking "Is your enterprise application secure?" is a rhetorically vague question. Because the overarching goal of security is to identify, evaluate, and mitigate risk, attempting to achieve complete security without an idea of what systems to secure and what threats to defend against is likely to lead to blown budgets or, worse, an exploitation of a thinly applied security solution. To get a grasp on an application's security (or vulnerability), an organization needs to understand and communicate what threats are most important to guard against and what defensive strategies are necessary.

This article explores the general challenges associated with application security, then introduces one effective defensive strategy: integrating application security into the development lifecycle by defining and enforcing a security policy.

Application Security Challenges
One of the challenges associated with safeguarding application security is establishing who should be responsible for it, and what this responsibility entails. If security exploits are three times more likely to occur on the application level than on the network level, is a developer three times more likely than a network administrator to be responsible for a security breach?

This question merits serious consideration, given the lack of emphasis placed on secure development. Any competent system administrator knows that systems need to be configured for security, patched regularly, and protected from hackers. It is becoming increasingly evident that developers, architects, and testers likewise need to learn about security to deliver successful and compliant applications. Furthermore, their efforts need to be coordinated so that security mechanisms are consistent and centralized.

In an effort to prevent security breaches and the negative publicity that accompanies them, some organizations are turning to external security experts, who are asked to "ethically hack" the organization's application to find any vulnerabilities preemptively. Although this strategy can be useful as a kind of smoke test to determine whether any flaws have been overlooked, it fundamentally treats the symptoms rather than addressing the cause. The management of application security is neither effective nor efficient when it is applied like an aftermarket protective coating. Instead, it should involve controls to prevent vulnerabilities from creeping into applications, as well as efforts to make security a key concern during the design and testing phases.

Another challenge is defining exactly what constitutes a "secure application." It has been said that any computer program that has no bugs is trivial by definition, and as a contrapositive, any program that is not trivial must contain bugs. This theory might be accepted only by those who have written and fixed countless bugs, but it is a key concept to consider when discussing application security. Any application should be considered to have vulnerabilities until proven otherwise, and proving the absence of vulnerabilities is much harder than proving the presence of vulnerabilities. So it doesn't make sense to claim to have a completely secure application or to assume that software is secure by default. A more attainable goal of application security is to have a documented and enforced awareness built into the process.

Implementing a Defensive Strategy
Once you determine your security goals and who should be responsible for making those goals a reality, the next challenge is how to make the application secure. One effective way to do this is to involve the complete development team (developers, architects, and testers) and integrate application security into the complete software development lifecycle by defining and enforcing a security policy.

Security, like quality and other key software development initiatives, is most successful when implemented as part of a controlled process. Throughout the evolution of software development, as projects have become larger and more complex, there has been increasing emphasis on processes that control the chaos that large, disparate enterprise applications can devolve into. Concepts such as test-driven development, extreme programming, and design patterns are all attempts to define constraints that ensure attributes such as quality, reusability, and maintainability.

When these techniques are codified in a software development process, the initial costs of documentation and reorganization are balanced by the clarity and predictability gained by following these models. This same approach is valuable when dealing with application security because the risks lie in the unknown, obscured, and untested areas of the code, and a systematic methodology of defining, locating, and fixing potential vulnerabilities significantly improves the security of the finished product.

The first step in integrating security into the team's development process is establishing a security policy. To start, educate everyone involved in an enterprise application development project regarding the potential security issues and convince everyone to adhere to standards that guarantee a certain level of security. These standards differ depending on roles; an architect might have to ensure reliable nonrepudiation strategies for transactions, whereas a developer might have to perform unit testing and use only standard trusted libraries. However, every company and application is different, and you should consult security experts while developing these practices.

After determining a robust set of security regulations, you should draft a security policy that explicitly lists each rule and how it will be enforced. The policy should focus on the steps taken to ensure the security of the application and function as a requirements document that guides secure development. Application security concerns tend to revolve around a common set of issues; consequently, you can typically extend and reuse security policies.

Having a carefully constructed security policy simplifies the security-related decisions that must be made throughout the software development lifecycle. For example, a developer security policy might specify that only vetted cryptographic libraries may be used; because few developers follow the latest research on broken ciphers, this rule simplifies their job by taking weak or custom encryption methods off the table.
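
As a concrete illustration, here is a minimal sketch in Java of what "use only vetted cryptographic libraries" can look like in practice. The encryption is delegated entirely to the standard javax.crypto API (AES-GCM, available in modern JDKs); the class and method names are hypothetical placeholders rather than anything a particular policy would prescribe, and key storage and rotation are deliberately out of scope.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class PolicyCompliantCrypto {

    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // Generate a key with the platform's vetted key generator rather than
    // deriving one from a hard-coded string.
    public static SecretKey newKey() throws Exception {
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        return generator.generateKey();
    }

    // Encrypt with AES-GCM from the standard library instead of a home-grown
    // cipher; the random IV is prepended to the ciphertext for later decryption.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] output = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, output, 0, iv.length);
        System.arraycopy(ciphertext, 0, output, iv.length, ciphertext.length);
        return output;
    }
}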

A security policy for architects might require that input be validated according to certain guidelines. A design that centralizes and consistently applies input validation minimizes the unexpected behavior associated with unchecked user input, without requiring developers to write their own ad hoc checks every time they parse an account number.
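
To make this concrete, the following sketch shows one way to centralize such a rule in Java; the class name and the ten-digit account number format are assumed purely for illustration.

import java.util.regex.Pattern;

// Hypothetical centralized validator: every module calls this class instead of
// scattering its own ad hoc checks throughout the code base.
public final class InputValidator {

    // Assumed format for this example: an account number is exactly 10 digits.
    private static final Pattern ACCOUNT_NUMBER = Pattern.compile("\\d{10}");

    private InputValidator() { }

    public static String requireAccountNumber(String raw) {
        if (raw == null || !ACCOUNT_NUMBER.matcher(raw.trim()).matches()) {
            // Reject bad input outright rather than trying to "repair" it,
            // and keep the error message generic.
            throw new IllegalArgumentException("Invalid account number");
        }
        return raw.trim();
    }
}

A servlet or controller would then call InputValidator.requireAccountNumber(request.getParameter("account")) rather than applying its own checks in place, so the validation rule lives in exactly one spot.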

The testing or quality assurance security policy could mandate specific types of tests that probe for security vulnerabilities. To comply with the policy, testers would need not only to determine whether the application behaves correctly but also to verify that it doesn't behave incorrectly when it is attacked.
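
A minimal sketch of such an abuse-case test, written here with JUnit against the hypothetical InputValidator shown earlier, illustrates the shift in mindset: the assertions describe what the application must refuse to do, not only what it should do with well-formed input.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

public class InputValidatorAbuseTest {

    @Test
    public void acceptsWellFormedAccountNumber() {
        assertEquals("0123456789", InputValidator.requireAccountNumber(" 0123456789 "));
    }

    @Test
    public void rejectsHostileInput() {
        String[] attacks = {
            "' OR '1'='1' --",            // SQL injection attempt
            "<script>alert(1)</script>",  // script injection attempt
            "../../etc/passwd",           // path traversal attempt
            "",                           // empty input
            null                          // missing input
        };
        for (String attack : attacks) {
            try {
                InputValidator.requireAccountNumber(attack);
                fail("Validator accepted hostile input: " + attack);
            } catch (IllegalArgumentException expected) {
                // Rejected, as the policy requires.
            }
        }
    }
}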

Enforcing the Policy
Once an effective policy is established, it should serve as a centralized repository for managing secure coding standards, design decisions, test cases, and configuration practices. It should become prescriptive for those working on the project and descriptive for those managing the project. The document should drive the expected security measures for the application throughout the development lifecycle so that problems are found earlier, when fixing them is easier, faster, and less costly. Research shows that bugs found early in the development cycle cost far less to fix than those found by users after release. This is especially true of security vulnerabilities, which can cause major damage to both financial assets and corporate reputations.

To reap the potential benefits of having a security policy, the required practices must be integrated into the development lifecycle and adhered to by everyone involved on the project. This is fundamentally important—once the policy is created, it must be implemented consistently. The records associated with the enforcement of the security policy, along with the security policy itself, are the kinds of artifacts necessary to comply with standards that have auditing requirements.

The security policy can be disseminated to all members of the software team as a shared strategy for ensuring the security of an enterprise application. Problems identified during development should be addressed before the code is added to the team's shared code base, and problems identified by testers can be reported to the developers and added to regression tests to ensure that the vulnerability doesn't creep back into the code.

Depending on the size of the organization and the security aptitude of the testers, this testing might need to be augmented by penetration tests from outside consultants. However, the more that can be accomplished internally, the less time and money is spent fixing the problems those tests uncover, and the fewer repeat visits are required.

Most application vulnerabilities derive from well-understood attack vectors, and a security policy articulates how to deal with these known issues, as well as how to guard against new threats. By enforcing the security policy, you can identify common vulnerabilities and fix them earlier, and isolate the areas of high risk into heavily protected "drawbridges." If you have a security policy and enforce it, the question "Is your application secure?" becomes moot; "Is your application security up to spec?" is a question that's more realistic, more useful, and, most importantly, answered.

About the Author
Alex Smolen is Security Solutions Manager at Parasoft. He spoke at FTP's Enterprise Architect Summit conference in May.