During development, software is tested for many attributes, such as correctness, completeness, security, and quality. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; rather, testing furnishes a criticism or comparison of the state and behavior of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of software quality assurance, which encompasses all business process areas, not just testing.
One type of software testing or analysis is static analysis (or static code analysis), which is the analysis of computer software performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code and, in other cases, on some form of the object code. The term is usually applied to analysis performed by an automated tool, with human analysis being called program understanding or program comprehension. The sophistication of the analysis performed by tools varies from those that only consider the behavior of individual statements and declarations to those that include the complete source code of a program in their analysis. Uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., that its behavior matches its specification).
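As a minimal illustration (not part of any particular tool described herein), a lint-style static check can be sketched in Python using the standard `ast` module: the source is parsed into a syntax tree and inspected without ever being executed. The specific rule checked here, flagging `== None` comparisons, is chosen purely as an example.

```python
import ast

def find_none_comparisons(source):
    """Return line numbers where '== None' or '!= None' comparisons occur.

    A toy static analysis: walk the syntax tree of a source snippet
    without executing it, and flag a pattern that lint-style tools
    commonly report (preferring 'is None' / 'is not None').
    """
    warnings = []
    tree = ast.parse(source)  # parse only; the code is never run
    for node in ast.walk(tree):
        if isinstance(node, ast.Compare):
            for op, comparator in zip(node.ops, node.comparators):
                if (isinstance(op, (ast.Eq, ast.NotEq))
                        and isinstance(comparator, ast.Constant)
                        and comparator.value is None):
                    warnings.append(node.lineno)
    return warnings

snippet = "x = compute()\nif x == None:\n    print('empty')\n"
print(find_none_comparisons(snippet))  # flags line 2: [2]
```

Note that the analysis succeeds even though `compute()` is undefined; because the program is never executed, only its structure matters, which is precisely what distinguishes static from dynamic analysis.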
There are many different types of static analyses which need to be performed on software during the course of design and development, such as Code Review, Architectural Discovery, Impact Analysis, Type State Analysis, etc. "Code Review" is the systematic examination (often as peer review) of computer source code, intended to find and fix mistakes overlooked in the initial development phase and thereby improve overall code quality. Code reviews can often find and remove common security vulnerabilities such as format string attacks, race conditions, and buffer overflows, thereby improving software security. Online software repositories, like anonymous CVS, allow groups of individuals to collaboratively review code to improve software quality and security.
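As a hedged illustration of the kind of flaw a code review commonly catches, consider a time-of-check-to-time-of-use race condition (one of the race conditions mentioned above). The function names below are hypothetical, chosen only for this sketch; the point is the two-step check-then-use pattern a reviewer would flag, and the single-step alternative.

```python
import os

def read_config_racy(path):
    # RACY: the file may be removed or replaced by another process in
    # the window between the exists() check and the open() call.
    # A code reviewer would flag this check-then-use pattern.
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    return None

def read_config_safe(path):
    # Safer: attempt the open directly and handle failure, so there is
    # no separate check step for another process to race against.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None
```

Both functions behave identically in the single-process case; the difference only surfaces under concurrent access, which is exactly why such defects are overlooked in initial development and caught in review.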
"Architectural Discovery" refers to the discovery and understanding of existing software architecture. Architects and developers often begin their work with existing code. They need to quickly review the application's structure and behavior before proceeding with new development, because inherited applications often exhibit execution performance problems, or produce undesired side effects upon modification of the source, when the existing architecture is not understood. These problems are often the result of developers unknowingly introducing unwanted dependencies during implementation, resulting in architectural decay.
There are many other types of analyses which the software architect or developer may wish to use on the software component or package—such as Deep Static Analysis, Type State Analysis, Impact Analysis, Runtime Data Analysis, etc. Each different analysis type provides the architect/developer with different data regarding the software component being tested/analyzed.
There are many static analysis vendors and tools available to handle such tests as Code Review, Architectural Discovery, Impact Analysis, Type State Analysis, etc. A listing of some of the vendors providing such tools can be found here: http://www.laatuk.com/tools/review_tools.html.
Furthermore, besides the different forms of analysis that need to be performed during the design and development of software, the analyses often need to be applied to various specific domains such as Java, C++, HTML, and so on. In today's environment, software is becoming more and more complex, combining and mixing software components from various and disparate sources. A group in a company may develop a component to perform a particular task, and a totally independent group from the same or a different company may develop a second component to perform a different task. Likewise, many organizations (e.g., open source organizations, such as SourceForge.net and the Apache Software Foundation) offer software components that perform discrete functions for no-cost use by others. Obviously, it is highly attractive to utilize such well-known, well-tested components rather than building new components from scratch to perform the same functions. However, the existing components are often written in different languages for different platforms (e.g., Java, C++, etc.), so analyzing an end software product composed of components of disparate resource types requires tools that can perform the desired tests for each language and domain. The different languages, different platforms, and different technologies are considered, for the purposes of this application, to be disparate resource types. This is especially true in software projects that conglomerate disparate resource types into a single resulting software project; that is to say, most major software projects contain a mélange of different technologies and development languages (i.e., disparate resource types).
Presently, in order to perform analysis on a complete project (having many different technologies and development languages), several different tools are needed. Users must analyze projects using different tools and operating modes, which limits their ability to accurately assess complete projects. For instance, when a new development or analysis tool is introduced, productivity often initially takes a hit. The tool may be difficult to install, to configure, or to learn. This may result in the perception that the new tool is simply too difficult to adopt which slows development down.
Another problem is that, with existing analysis tools, because multiple tools need to be used for the multi-resource software project, the analysis must be done in a serial manner. That is, one test is configured by the architect/developer using a first tool so that the first tool performs the test in the manner which the architect/developer wishes it to be performed. The test is performed and the analysis results are obtained. Next, a second test is configured by the architect/developer utilizing a second tool so that the second tool performs the test in the manner which the architect/developer wishes it to be performed. The test is performed and the analysis results are obtained and so forth.
This is undesirable for a number of reasons. The first, most obvious reason is the time wasted—both the architect's/developer's time and the development cycle's time. In today's software world, development cycles are dramatically shorter than they were even 5 years ago, so there is no time to waste. The architect's process of configure test 1, run test 1, wait, receive results 1, configure test 2, run test 2, wait, receive results 2, and so on has the wasted "wait" time built in. In addition, while running test 1 and waiting for results, the architect may lose focus by trying to use the wasted time to multitask on other issues. This in turn causes unnecessary churn and overhead as the architect tries to refocus on the task at hand. It would be desirable to have each test run in parallel or concurrently to alleviate these problems.
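The serial-versus-concurrent distinction above can be sketched as follows. This is an illustrative sketch only, not an implementation of any tool described in this application: the two analysis functions are hypothetical stand-ins for independently configured tests, and the point is that submitting them to a thread pool removes the "configure, run, wait, receive, repeat" serialization.

```python
from concurrent.futures import ThreadPoolExecutor

def code_review_check(source):
    # Stand-in for a first configured analysis (e.g., a review-style check).
    return ("code_review", source.count("TODO"))

def line_count_check(source):
    # Stand-in for a second, independent analysis.
    return ("line_count", len(source.splitlines()))

def run_analyses_in_parallel(source, analyses):
    # Submit every analysis at once; none waits for another to finish.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(analysis, source) for analysis in analyses]
        return dict(f.result() for f in futures)

source = "x = 1\n# TODO: validate input\ny = 2\n"
results = run_analyses_in_parallel(source, [code_review_check, line_count_check])
print(results)
```

Because the analyses are independent, total elapsed time approaches that of the slowest single analysis rather than the sum of all of them, and the architect receives one combined set of results instead of attending to each test in turn.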
Another problem with using multiple tools to perform the various tests on the various platforms is that the different tools have different user interfaces. This means that the architect or developer performing the analysis must become versed or knowledgeable in each of the tools, from initial setup, to configuring the analysis, to configuring how the results will be laid out. This consumes yet more unnecessary time and causes much frustration for the user.
In view of the foregoing, a need exists to overcome these problems by providing a system and method for performing simultaneous static analysis on disparate resource types and providing a unified results report, making the analysis much easier for the user.