Software Testing – Types

There are many (a lot!) different types of software testing.  I won’t go into any of them in detail; I just wanted to familiarize myself with some of the terms and the basics.

  1. Installation testing – this ensures that the system is installed correctly and working properly at the user’s hardware location.
  2. Compatibility testing – this checks for correct operation between application software when systems or environments are upgraded from the original, possibly causing other pieces to not work correctly.
  3. Smoke testing – this is a minimal attempt to operate the software and is used to determine if there are any basic problems.
  4. Sanity testing – this determines if it is reasonable to go ahead with further (more in-depth) testing.
  5. Regression testing – this focuses on finding defects after a major code change has happened.
  6. Acceptance testing – this may be performed by the customer and is part of the hand-off process between phases of development.
  7. Alpha testing – this is a simulated or actual operational test by potential users.
  8. Beta testing – this comes after alpha testing and may be a form of external user acceptance testing.
  9. Functional vs. non-functional testing – functional testing verifies a specific action or function of the code (does this particular feature work?), while non-functional testing examines qualities like performance, security, and usability.
  10. Destructive testing – this attempts to force the software to fail and verifies that the software functions properly even when receiving unexpected inputs.
  11. Performance testing – this is used to determine how a system performs under specific workloads (responsiveness, large quantities of data, large number of users, etc), and there’s a whole set of sub-tests (load, volume, stress, stability – and sometimes these terms are used interchangeably).
  12. Usability testing – this looks at the user interface to see if it is easy to use and understand.
  13. Accessibility testing – this relates to standards associated with the Americans with Disabilities Act of 1990, Section 508 Amendment to the Rehabilitation Act of 1973, and the Web Accessibility Initiative (WAI).
  14. Security testing – this checks that the software protects its data and resists unauthorized access or malicious input.
  15. Internationalization and localization – this looks at translating software into different languages, keyboards, fonts, bi-directional text, and date/time formats.
  16. Development testing – this supports QA testing and is executed to eliminate construction errors prior to code going to QA.
  17. A/B testing – this is a comparison of two outputs, usually when only one variable has changed, typically used in small-scale situations.
  18. Concurrent testing – this focuses on how the software performs while running continuously with normal inputs.
  19. Conformance testing – this verifies that a product performs according to its standards.
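A few of these are easy to sketch in code. Here is a minimal example of destructive testing: a hypothetical input parser (the function name and rules are my own invention, not from any particular product) is fed unexpected values, and the checks verify that it fails gracefully instead of crashing.

```python
def parse_quantity(raw):
    """Parse a user-entered quantity; return None for anything invalid."""
    try:
        value = int(str(raw).strip())
    except (ValueError, TypeError):
        return None
    return value if value > 0 else None

# Destructive testing: none of these unexpected inputs should raise an
# exception -- each should come back as None (rejected gracefully).
for bad_input in ["", "abc", "-5", "1.5", None]:
    assert parse_quantity(bad_input) is None

# And the normal case still works.
assert parse_quantity("  3 ") == 3
```

The point is less the parser itself than the habit: deliberately throw garbage at the code and confirm it degrades predictably.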

Whew – what a list!  I really didn’t realize all of the different types of testing.  I think I intuitively have done some of these things before, but didn’t know that it had a specific name.

Software Testing – Levels

I wanted to continue exploring software testing.  This post will focus on levels of testing.  I’ve seen these terms used in different job descriptions that I have come across recently and I wanted to know a little more about them.  From what I have read, there are four levels of testing:

  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing is also known as component testing.  This type of testing verifies the functionality of specific pieces of code, particularly at the function level.  Do the functions work the way they are supposed to?  If you are calculating sales tax, is the math correct?  Is it grabbing the right multipliers?  Is the answer in the correct format?  Does the function produce the expected result?

I found that unit tests can be written by the developers as they are working on the code (this was referred to as ‘white-box style’).  This type of testing focuses on the building blocks of the code and does not include whether all the building blocks work well together (this is leading to integration testing).  Unit testing is usually performed by the software developer during the construction phase of the software development life cycle (SDLC).  Instead of sending code to QA to test and then return to the developer multiple times (this does not seem very efficient), the developer will perform tests during construction in order to eliminate basic errors before the larger package of code is sent to QA.
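To make the sales-tax example concrete, here is a minimal unit test sketch using Python’s unittest module. The `sales_tax` function and its 7% default rate are hypothetical stand-ins for real application code.

```python
import unittest

# Hypothetical function under test -- the sales-tax example from above.
def sales_tax(amount, rate=0.07):
    """Return the tax on `amount`, rounded to the nearest cent."""
    return round(amount * rate, 2)

class SalesTaxTest(unittest.TestCase):
    def test_math_is_correct(self):
        self.assertEqual(sales_tax(100.00), 7.00)

    def test_uses_the_given_rate(self):
        self.assertEqual(sales_tax(50.00, rate=0.10), 5.00)

    def test_result_is_rounded_to_cents(self):
        self.assertEqual(sales_tax(19.99), 1.40)  # 19.99 * 0.07 = 1.3993

if __name__ == "__main__":
    unittest.main()
```

Each test asks one of the questions from the paragraph above: is the math correct, is the right multiplier used, is the answer in the correct format?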

Integration testing is the next step up from unit testing.  It involves testing multiple components in a software product to make sure that they work well together.  Some components may be tested and then additional components added to increase the level or depth of the tests.  This is an iterative approach.  Or you could use the ‘big bang’ approach where all components are lumped together for testing all interfaces and all levels of integration at once.  It sounds like the iterative approach is favored, since it would be easier to identify issues among a smaller number of components.
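A tiny sketch of the iterative approach: two components that would each pass unit testing on their own are exercised together to check their interaction. All names here are hypothetical.

```python
# Component 1: the tax calculation (unit tested on its own).
def sales_tax(amount, rate=0.07):
    return round(amount * rate, 2)

# Component 2: order totaling, which depends on component 1.
def order_total(line_items, rate=0.07):
    """Sum the line items and add tax to the subtotal."""
    subtotal = round(sum(line_items), 2)
    return round(subtotal + sales_tax(subtotal, rate), 2)

# Integration check: do the pieces work *together*, not just alone?
assert order_total([19.99, 5.01]) == 26.75  # subtotal 25.00, tax 1.75
```

With the iterative approach, a failure here points at the seam between these two components; in a big-bang test of the whole system, the same failure could hide anywhere.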

System testing is also known as ‘end-to-end testing’ and involves tests over the entire software product or system.  This level of testing looks at how the overall product works as well as checking to make sure that the software product does not interfere with the operating environment or other processes within that environment.  Memory corruption, high loads on servers, and slow response times could be indications of the software product interfering with the overall operating environment.

Acceptance testing occurs when the system is delivered to the user and determines whether the user (or customer/client) ‘accepts’ the system.

I know this may seem like a basic topic to some, but it has really been helpful for me to research and learn about these terms and concepts.  Now, when I read articles or job descriptions, I’ll have a much better idea of what they are talking about.

Testing, testing…

Currently, I am involved with a lot of testing of the software product at my work. This includes trying to duplicate errors to find the source of the problem, documenting bugs and bug fixes, and adding enhancements. Enhancements don’t necessarily ‘fix’ something that is broken, but they make the user experience (UX) better. I have read about testing methodologies before, but never really developed a standard or a specific process, at least not one that I could define. My approach to testing usually comes down to two questions: What is it supposed to do? Does it do that? If yes, good. If no, keep testing.

As I’ve been working on a lot of testing, I felt the need to develop some processes around it, some standard questions that I would ask at certain points in the testing process, and even some key pointers. If something (a button, for instance) doesn’t produce the desired results, what is connected to that button? What is supposed to happen? What actually happens?

So, I did a quick Google search and found a wealth of information on software testing methods. I think I’ve been using some of these methods, but just haven’t had the official terms to go along with my process.

Wikipedia provides a large amount of information on software testing. And, did you know there is even an Association for Software Testing? Microsoft provides some great information too. There is also a great website dedicated to software testing – Software Testing Fundamentals. Their About page explains their mission and purpose:
“Software Testing Fundamentals is a platform to gain (or refresh) basic knowledge in the field of Software Testing. The site consists of several articles that have been collected from various resources and experiences. If we are to ‘cliche’ it, the site is of the testers, by the testers, and for the testers. Our goal is to build a resourceful repository of Quality Content on Quality.”
They also have a page devoted to jokes about software testing! I like these guys!

The Definition:
I think that software testing is the investigation into the quality of the software product. The result of this investigation can be used to determine if the product meets the original goals under which it was designed, responds as expected based on both correct and incorrect inputs, and produces the desired results in an appropriate amount of time. Testing involves using a program and evaluating the results. There are testing strategies for both Agile development and traditional phased development.

Oftentimes, testing is focused around finding bugs or defects. Fixing these bugs or defects ultimately improves the quality, and therefore the value, of the software product. One statement that I particularly liked on Wikipedia is: “Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions.” I also think it is important to look at your target audience. Who is going to be using your software? Is it a video game or an accounting product? Knowing your audience, and programming to that end, can help make their experience better, resulting in a positive review of the software product.

Besides just executing the software to observe and detect bugs, it also seems helpful to examine the code. Does the error occur as the button is pressed or after the button has been pressed? What functions are connected to the button? Are there validation scripts that run when the button is pressed or when the text field has been changed? Those subtle differences could be key in identifying the exact code that causes the bug.

Some bugs or defects occur because of coding errors, but oftentimes bugs develop because requirements for the product change during development. This causes a gap – what has been coded up to this point was based on a particular set of requirements. If the requirements change, the way the product is coded changes. Sometimes bugs are not noticed until the environment is changed. I noticed this when testing software for a product used in the U.S. and in Europe. The date format in the U.S. is mm-dd-yyyy, while the format in Europe is dd-mm-yyyy. In order for the user to experience the software seamlessly whether in Europe or the U.S., the day, month, and year had to be coded in separate fields. If the software is used in the U.S., a function combines those individual fields in the order of month, day, and year. If the software is used in Europe, another function combines those individual fields in the order of day, month, and year. During initial testing, only the U.S. date format was tested (prior to my arrival here), and it seemed to work just fine. However, when switching to European date format, the functions ended up causing infinite Ajax callbacks and some weird cursor activity between the date fields. The software was not originally designed to handle the different date formats and functions were added to try to accommodate the new requirement.
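The date-format fix described above can be sketched like this (the function and field names are hypothetical; the real software presumably does much more):

```python
# One combining function per region, chosen by locale, so the separate
# day/month/year fields are assembled in the right order for each market.
def combine_date(day, month, year, locale="US"):
    """Combine separate date fields into the local display format."""
    if locale == "US":
        return f"{month:02d}-{day:02d}-{year:04d}"  # mm-dd-yyyy
    return f"{day:02d}-{month:02d}-{year:04d}"      # dd-mm-yyyy (e.g. Europe)

assert combine_date(31, 1, 2024, locale="US") == "01-31-2024"
assert combine_date(31, 1, 2024, locale="EU") == "31-01-2024"
```

Testing only the US branch would have made everything look fine; it took a date like the 31st of January, run through the European path, to expose the gap.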

The Cost of Bugs:
There was some mention of the cost of fixing software bugs on Wikipedia, but it was a bit sketchy and not well documented. I believe there are large dollar costs involved with releasing a finished product and then identifying, testing, and fixing bugs, and releasing a patch to fix those bugs. There are also psychological or emotional costs: there may be negative feedback or a negative view of your organization when users collectively decide that they can’t trust your product. I’m thinking about a certain government website when it was first released. The news had a field day reporting on how it was hacked, leading to a feeling of public distrust about the program overall, not just the website. It seems best to design and architect your software well from the beginning to avoid or reduce the number of bugs that occur. Before widespread release of your software, adequate testing is invaluable. Knowing that you probably can’t test for every single possible error that could occur (bugs do happen), there should be some process for reporting and fixing bugs and then communicating about upgrades to your users.

Key Things I’ve Learned:

  • Observation!
  • Only change ONE thing at a time!
  • Ask Why!
  • Check the details!
  • DOCUMENT what you tried AND what the results were!