Software Testing – Tools

Software testing tools – just the bare-bones basics…

  1. Test plan – this is a test specification.  Knowing that a car will be tested at a speed of 60 miles per hour helps its developers design it so that it does indeed reach that speed.
  2. Test script – this is a piece of programming code that replicates user actions.
  3. Traceability matrix – this is a table that maps requirements (or other source documents) to test cases, so the right tests can be updated when those documents change.
  4. Test case – this is a defined input with an expected result.
  5. Test suite – this is a term used for the collection of test cases, often including more detailed instructions or goals.
  6. Test harness – software, tools, samples of data input and output, and configurations are collectively known as a test harness.
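To make a few of these terms concrete, here is a minimal sketch in Python's built-in unittest framework. The `apply_discount` function is purely hypothetical; the point is that each test method is a test case (a defined input with an expected result) and the suite is the collection of those cases.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by a percentage."""
    return round(price * (1 - percent / 100), 2)

class DiscountTestCase(unittest.TestCase):
    """Each test method below is one test case: defined input, expected result."""

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

# A test suite is simply a collection of test cases.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTestCase)

if __name__ == "__main__":
    unittest.TextTestRunner().run(suite)
```

A test script would be the automated code that drives these cases, and the harness is everything around them: the runner, the sample data, and the configuration.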

Software Testing – Types

There are many (a lot!) different types of software testing.  I won’t go into any of them in detail, but just wanted to familiarize myself with some of the terms and the basics.

  1. Installation testing – this ensures that the system is installed correctly and working properly at the user’s hardware location.
  2. Compatibility testing – this checks for correct operation between application software when systems or environments are upgraded from the original, possibly causing other pieces to not work correctly.
  3. Smoke testing – this is a minimal attempt to operate the software and is used to determine if there are any basic problems.
  4. Sanity testing – this determines if it is reasonable to go ahead with further (more in-depth) testing.
  5. Regression testing – this focuses on finding defects after a major code change has happened.
  6. Acceptance testing – this may be performed by the customer and is part of the hand-off process between phases of development.
  7. Alpha testing – this is a simulated or actual operational test by potential users.
  8. Beta testing – this comes after alpha testing and may be a form of external user acceptance testing.
  9. Functional vs. non-functional testing – functional testing verifies a specific action or function of the code (does this particular feature work?), while non-functional testing looks at qualities such as performance, security, and scalability (how well does it work?).
  10. Destructive testing – this attempts to force the software to fail and verifies that the software functions properly even when receiving unexpected inputs.
  11. Performance testing – this is used to determine how a system performs under specific workloads (responsiveness, large quantities of data, large number of users, etc), and there’s a whole set of sub-tests (load, volume, stress, stability – and sometimes these terms are used interchangeably).
  12. Usability testing – this looks at the user interface to see if it is easy to use and understand.
  13. Accessibility testing – this relates to standards associated with the Americans with Disabilities Act of 1990, Section 508 Amendment to the Rehabilitation Act of 1973, and the Web Accessibility Initiative (WAI).
  14. Security testing – this checks that the system protects its data and resists unauthorized access.
  15. Internationalization and localization – this looks at adapting software for different languages and regions: translation, keyboard layouts, fonts, bi-directional text, and date/time formats.
  16. Development testing – this supports QA testing and is executed to eliminate construction errors prior to code going to QA.
  17. A/B testing – this is a comparison of two outputs, usually when only one variable has changed, typically used in small-scale situations.
  18. Concurrent testing – this focuses on performance and stability when multiple users or processes are running at the same time with normal inputs.
  19. Conformance testing – this verifies that a product performs according to its standards.
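As a toy illustration of the smoke/sanity idea, here is a sketch in Python. The `create_app` factory is purely hypothetical; the point is that a smoke test is the most minimal "does it even start?" check, run before any deeper testing is attempted.

```python
def create_app(config=None):
    """Hypothetical application factory: returns a dict standing in for an app."""
    return {"config": config or {}, "status": "ok"}

def smoke_test():
    """Minimal attempt to operate the software: start it and check one thing."""
    app = create_app()
    # If even this fails, there is no point running more in-depth tests.
    return app["status"] == "ok"
```

A sanity check would then use the result of `smoke_test()` to decide whether it is reasonable to go ahead with the rest of the suite.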

Whew – what a list!  I really didn’t realize all of the different types of testing.  I think I intuitively have done some of these things before, but didn’t know that it had a specific name.

Software Testing – Levels

I wanted to continue exploring software testing.  This post will focus on levels of testing.  I’ve seen these terms used in different job descriptions that I have come across recently and I wanted to know a little more about them.  From what I have read, there are four levels of testing:

  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing can also be known as component testing.  This type of testing verifies the functionality of specific pieces of code, particularly at the function level.  Do the functions work the way they are supposed to?  If you are calculating sales tax, is the math correct?  Is it grabbing the right multipliers?  Is the answer in the correct format?  Does the function produce the expected result?
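The sales tax example above can be sketched as a set of unit tests. The function and the 8.25% rate are assumptions made up for illustration, not any real tax rule; each test checks one of the questions posed above.

```python
def sales_tax(subtotal, rate=0.0825):
    """Hypothetical function under test: tax owed, rounded to the cent."""
    return round(subtotal * rate, 2)

def test_sales_tax_math():
    # Is the math correct?
    assert sales_tax(100.00) == 8.25

def test_sales_tax_rate():
    # Is it grabbing the right multiplier?
    assert sales_tax(100.00, rate=0.05) == 5.00

def test_sales_tax_format():
    # Is the answer in the correct format (rounded to two decimals)?
    assert sales_tax(19.99) == 1.65
```

Each test exercises one small piece of behavior, which is exactly what makes unit tests useful for pinpointing where something broke.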

I found that unit tests can be written by the developers as they are working on the code (this was referred to as ‘white-box style’).  This type of testing focuses on the building blocks of the code and does not include whether all the building blocks work well together (this is leading to integration testing).  Unit testing is usually performed by the software developer during the construction phase of the software development life cycle (SDLC).  Instead of sending code to QA to test and then return to the developer multiple times (this does not seem very efficient), the developer will perform tests during construction in order to eliminate basic errors before the larger package of code is sent to QA.

Integration testing is the next step up from unit testing.  It involves testing multiple components in a software product to make sure that they work well together.  Some components may be tested and then additional components added to increase the level or depth of the tests.  This is an iterative approach.  Or you could use the ‘big bang’ approach where all components are lumped together for testing all interfaces and all levels of integration at once.  It sounds like the iterative approach is favored, since it would be easier to identify issues among a smaller number of components.
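The iterative approach can be sketched with two hypothetical components, a parser and a formatter, tested together; a third component could then be added to deepen the test, as described above.

```python
def parse_csv_line(line):
    """Component A: split a comma-separated line into trimmed fields."""
    return [field.strip() for field in line.split(",")]

def format_record(fields):
    """Component B: join fields into a display string."""
    return " | ".join(fields)

def test_parser_and_formatter_together():
    # Unit tests would check each component alone; the integration test
    # checks that the output of A is usable as the input of B.
    line = "Ada, Lovelace, 1815"
    assert format_record(parse_csv_line(line)) == "Ada | Lovelace | 1815"
```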

System testing is also known as ‘end-to-end testing’ and involves tests over the entire software product or system.  This level of testing looks at how the overall product works as well as checking to make sure that the software product does not interfere with the operating environment or other processes within that environment.  Memory corruption, high loads on servers, and slow response times could be indications of the software product interfering with the overall operation environment.

Acceptance testing occurs when the system is delivered to the user and determines whether the user (or customer/client) ‘accepts’ the system.

I know this may seem like a basic topic to some, but it has really been helpful for me to research and learn about these terms and concepts.  Now, when I read articles or job descriptions, I’ll have a much better idea of what they are talking about.

Software Testing – Methods

Continuing along with the theme of testing, I wanted to explore the various methods used for software testing. I found that there are a variety of approaches when it comes to software testing. Both static and dynamic testing work to improve the quality of the final software product.

  • Static Testing includes reviews, walk-throughs, or inspections of the software (or sections of it). It can include proofreading or using programming tools (e.g. text editors or compilers) to check the source code structure or syntax and data flow. Static testing revolves around verification.
  • Dynamic Testing is, well, dynamic. The program is actually executed or run with a test case (or a set of test cases), oftentimes in a debugger environment. It may also include testing just small pieces of the program before the entire program has been completed. Dynamic testing involves validation.

The ‘box approach’ includes the white-box and black-box testing methods:

  • White-Box testing
  • Black-box testing

Other testing methods include:

  • Visual testing
  • Grey-box testing

White-Box testing looks at testing internal structures (different from what the end user experiences). This is also known as clear box testing, glass box testing, transparent box testing and structural testing. It would be similar to testing nodes in a circuit (in-circuit testing). There are different techniques used in white-box testing – API testing, code coverage, fault injection, mutation testing, function coverage, and statement coverage. Perhaps one day, I will explore these individually and write in greater detail. My goal with this series was really to just learn the basics about some official methods of testing, which I had not even realized existed up to this point. My approach to testing started out haphazardly (does it work? why not? try this.) and I wanted to try to standardize my approach.

Black-box testing looks at examining the software functionality without knowing any of the internal workings of the software. Testers only look at what the software is supposed to do, not how it does it internally. There are many different methods available, including: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing. Thank you Wikipedia. A tester need not possess any programming knowledge, because the tests only exercise external functionality. On the other hand, since the internals are invisible to the tester, some paths through the program may go untested.
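Boundary value analysis, one of the black-box methods listed above, can be sketched briefly. The `is_valid_age` function is a made-up example; the tester only knows the stated rule (ages 0 through 120 are accepted) and probes values at and just outside each edge, without reading the code.

```python
def is_valid_age(age):
    """Hypothetical function under test: accept ages 0 through 120 inclusive."""
    return 0 <= age <= 120

def test_boundaries():
    # Values at, and just outside, each boundary of the valid range.
    assert is_valid_age(0) is True      # lower boundary
    assert is_valid_age(-1) is False    # just below it
    assert is_valid_age(120) is True    # upper boundary
    assert is_valid_age(121) is False   # just above it
```

Boundaries are where off-by-one mistakes tend to hide, which is why this method earns its own name.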

Specification-based testing, as mentioned above, looks at testing in alignment with the software requirements. One simple example would be testing the addition function on a calculator. By entering 2 + 2 (and knowing that the expected answer would be 4), you could verify that the program functions as expected.
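The calculator example above, written as code: the specification says 2 + 2 must equal 4, so the test checks exactly that, with no knowledge of how the addition is implemented. The `add` function is just a stand-in for the calculator's addition feature.

```python
def add(a, b):
    """Stand-in for the calculator's addition function."""
    return a + b

def test_addition_matches_spec():
    # The specification defines the input and the expected answer.
    assert add(2, 2) == 4
```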

Visual testing involves video recording or observing the testing process in order to witness the point where the software performs unexpectedly. This allows developers to see exactly what the tester did (what button was pushed, what browser, the timing of clicks, etc) in order to create the failure, and eliminates or lowers the need for developers to replicate the failure, saving time and allowing them to focus on a solution. I experienced this myself last week when I was unable to replicate a bug that had been reported. After a conference call and a ‘GoToMeeting’ (in order to share the computer screen), I was able to witness the timing involved with our testing staff and see the error that I had been unable to replicate. Her timing was described as ‘regular’ on the bug tracking record (clicking from field to field), whereas I would describe her timing as ‘extremely slow’ based on watching the screen as she clicked through fields. That made a big difference for me to be able to replicate the error, and ultimately find the cause and develop a solution or fix.

Gray-box testing (or Grey, depending on which side of the pond you are on) is something I work with pretty often right now, although I never knew an official name for it previously. It includes having some knowledge of the background functions and structure, particularly the supporting database. A tester could perform a test on the software at the user interface level and execute SQL queries before and after the test to verify that the appropriate change took place at the database level. This type of testing may help the tester to design more accurate tests.
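The SQL-before-and-after idea can be sketched with Python's built-in sqlite3 module. The schema and the `register_user` function are assumptions for illustration; `register_user` stands in for the action a tester would trigger through the user interface.

```python
import sqlite3

def register_user(conn, name):
    """Stand-in for the action a tester triggers at the UI level."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# SQL query before the test...
before = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

register_user(conn, "alice")

# ...and after, to verify the appropriate change took place at the database level.
after = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert after == before + 1
```

Knowing the schema lets the tester confirm not just that the screen looked right, but that the row actually landed in the right table.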

My sources:
Wikipedia provides a large amount of information on software testing. And, did you know there is even an Association for Software Testing? Microsoft provides some great information too. There is also a great website dedicated to software testing – Software Testing Fundamentals.