
What is Software Testing?

Software testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [Hetzel88] Although crucial to software quality and widely deployed by programmers and testers, software testing remains an art, owing to our limited understanding of the principles of software. The difficulty in software testing stems from the complexity of software: we cannot completely test a program of even moderate complexity. Testing is more than just debugging. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can also be used as a generic metric. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off among budget, time, and quality.

Software testing is the process of executing a program or system with the intent of finding errors [Myers79], or, more broadly, any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [Hetzel88] Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible. [Rstcorp]
Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects -- or bugs -- remain buried and latent until they are activated.
Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.
Discovering the design defects in software is equally difficult, for the same reason: complexity. Because software and digital systems in general are not continuous, testing boundary values is not sufficient to guarantee correctness. All possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing even a simple program that adds two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of years, even if tests were performed at a rate of a billion per second, as the rough calculation sketched below illustrates. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets worse still, because timing, unpredictable environmental effects, and human interactions are all possible input parameters under consideration.
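As a back-of-the-envelope check on that claim, the following Python sketch computes how long exhaustive testing of a two-input 32-bit adder would take; the rate of one billion test executions per second is an assumption chosen for illustration, and it is already very generous.

```python
# Rough estimate of exhaustive-testing time for a two-input 32-bit adder.
# Assumption: one billion test executions per second (an optimistic rate).
total_cases = 2 ** 64                # every pair of 32-bit inputs
rate_per_second = 1_000_000_000      # assumed test throughput
seconds = total_cases / rate_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"About {years:,.0f} years")   # roughly 585 years
```

Even under this optimistic assumption the answer comes out to several centuries, which is why exhaustive testing is dismissed as infeasible.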
A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it didn't work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive. [Rstcorp]
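To make that regression problem concrete, here is a minimal sketch using Python's standard unittest module, with a hypothetical add() function standing in for the changed code. It shows why a fix must be followed by re-running the previously passing tests, not just the one that exposed the failure.

```python
import unittest

def add(a, b):
    # Imagine this line was just changed to fix a reported failure.
    return a + b

class RegressionSuite(unittest.TestCase):
    def test_case_that_previously_failed(self):
        # The failure that motivated the change.
        self.assertEqual(add(2, 2), 4)

    def test_case_that_previously_passed(self):
        # Its behavior is no longer guaranteed after the change,
        # so it must be re-executed rather than assumed to still pass.
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()
```

Re-running the whole suite after every change is exactly the "restart testing" step described above, and the cost of doing so for a large system is what makes the expense prohibitive.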
An interesting analogy compares the difficulty of software testing to pesticide use, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone does not guarantee that the software gets better, because the Complexity Barrier [Beizer90] principle states: software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs you allow another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs. [Beizer90]
