Software Testing – Levels

I wanted to continue exploring software testing.  This post will focus on levels of testing.  I’ve seen these terms in job descriptions I’ve come across recently, and I wanted to know a little more about them.  From what I have read, there are four levels of testing:

  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing can also be known as component testing.  This type of testing verifies the functionality of specific pieces of code, particularly at the function level.  Do the functions work the way they are supposed to?  If you are calculating sales tax, is the math correct?  Is it grabbing the right multipliers?  Is the answer in the correct format?  Does the function produce the expected result?
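
As a concrete version of that sales tax example, here is a minimal unit test sketch using NUnit; the TaxCalculator class and the 8.25% rate are hypothetical, invented just for illustration:

```csharp
using System;
using NUnit.Framework;

// Hypothetical sales tax helper, purely for illustration.
public static class TaxCalculator
{
    public static decimal AddSalesTax(decimal subtotal, decimal rate)
    {
        // Round to cents, the format a receipt would display.
        return Math.Round(subtotal * (1 + rate), 2);
    }
}

[TestFixture]
public class TaxCalculatorTests
{
    [Test]
    public void AddSalesTax_UsesRightMultiplierAndFormat()
    {
        // 19.99 * 1.0825 = 21.639175, which should round to 21.64.
        Assert.AreEqual(21.64m, TaxCalculator.AddSalesTax(19.99m, 0.0825m));
    }
}
```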

I found that unit tests can be written by the developers as they are working on the code (this was referred to as ‘white-box style’).  This type of testing focuses on the building blocks of the code and does not cover whether all the building blocks work well together (that is where integration testing comes in).  Unit testing is usually performed by the software developer during the construction phase of the software development life cycle (SDLC).  Instead of sending code to QA and having it bounce back to the developer multiple times (not very efficient), the developer tests during construction in order to eliminate basic errors before the larger package of code is sent to QA.

Integration testing is the next step up from unit testing.  It involves testing multiple components of a software product together to make sure that they work well with each other.  A few components may be tested first, with additional components added to increase the scope and depth of the tests; this is an incremental, iterative approach.  Or you could use the ‘big bang’ approach, where all components are lumped together to test all interfaces and all levels of integration at once.  It sounds like the incremental approach is favored, since it is easier to pinpoint issues among a smaller number of components.
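
As a sketch of the incremental approach, here is a hedged example that exercises a hypothetical Order class together with the TaxCalculator from the previous example, rather than testing either one in isolation:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Hypothetical component that depends on TaxCalculator (defined earlier).
public class Order
{
    private readonly List<decimal> _lines = new List<decimal>();

    public void AddLine(decimal price) => _lines.Add(price);

    public decimal Total(decimal taxRate) =>
        TaxCalculator.AddSalesTax(_lines.Sum(), taxRate);
}

[TestFixture]
public class OrderIntegrationTests
{
    [Test]
    public void Total_CombinesLineItemsAndTax()
    {
        var order = new Order();
        order.AddLine(10.00m);
        order.AddLine(9.99m);

        // Exercises Order and TaxCalculator together: a failure here could
        // come from either component or from the interface between them.
        Assert.AreEqual(21.64m, order.Total(0.0825m));
    }
}
```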

System testing is also known as ‘end-to-end testing’ and involves tests over the entire software product or system.  This level of testing looks at how the overall product works, as well as checking that the software does not interfere with the operating environment or other processes within that environment.  Memory corruption, high loads on servers, and slow response times could all be signs of the software interfering with the overall operating environment.
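
A system-level test typically drives the deployed application from the outside, the way a user would. Below is a minimal sketch using Selenium WebDriver for C#; the URL, element id, and page title are all made up for illustration:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CheckoutEndToEndTests
{
    [Test]
    public void Checkout_ShowsConfirmationPage()
    {
        // Launches a real browser against a (hypothetical) running instance.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:5000/cart");
            driver.FindElement(By.Id("checkout-button")).Click();

            // The whole stack (UI, server, database) has to work for this to pass.
            Assert.IsTrue(driver.Title.Contains("Order Confirmation"));
        }
    }
}
```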

Acceptance testing occurs when the system is delivered to the user and determines whether the user (or customer/client) ‘accepts’ the system, typically by checking the delivered system against the agreed-upon requirements.

I know this may seem like a basic topic to some, but it has really been helpful for me to research and learn about these terms and concepts.  Now, when I read articles or job descriptions, I’ll have a much better idea of what they are talking about.

Color Schemes

I am developing a new website in .NET using C# and the MVC pattern to highlight some of the web/coding projects I have worked on over the past couple of years.  I wanted to share this awesome website I found for developing color schemes.  Check out Paletton – you can choose monochromatic, adjacent 3-color, triad 3-color, tetrad 4-color, and free-style 4-color schemes.  You can also adjust the lightness or darkness of the colors.

So, now I have a color scheme for my new site, which I’ll be adding to my CSS this weekend.  And hopefully, I will be able to unveil my new site soon!  Stay tuned.  🙂

Software Testing – Methods

Continuing along with the theme of testing, I wanted to explore the various methods used for software testing. I found that there are a variety of approaches when it comes to software testing. Both static and dynamic testing work to improve the quality of the final software product.

  • Static Testing includes reviews, walk-throughs, or inspections of the software (or sections of it). It can include proofreading or using programming tools (e.g., text editors or compilers) to check the source code’s structure, syntax, and data flow. Static testing revolves around verification.
  • Dynamic Testing is, well, dynamic. The program is actually executed with a test case (or a set of test cases), often in a debugger environment. It may also include testing small pieces of the program before the entire program has been completed. Dynamic testing involves validation.

The ‘box approach’ includes the white-box and black-box testing methods:

  • White-box testing
  • Black-box testing

Other testing methods include:

  • Visual testing
  • Grey-box testing

White-box testing examines the internal structures of the software (as opposed to what the end user experiences). It is also known as clear box testing, glass box testing, transparent box testing, and structural testing. It would be similar to testing nodes in a circuit (in-circuit testing). There are different techniques used in white-box testing – API testing, fault injection, mutation testing, and code coverage, which itself can be measured in different ways, such as function coverage and statement coverage. Perhaps one day, I will explore these individually and write in greater detail. My goal with this series was really to just learn the basics about some official methods of testing, which I had not even realized existed up to this point. My approach to testing started out haphazardly (does it work? why not? try this.) and I wanted to try to standardize my approach.
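
Coming back to the technique itself, here is a sketch of white-box thinking in a unit test: because we can see the code, we know there is a branch at the $50 threshold, so we test each side of it plus the boundary. The Shipping class and its rule are hypothetical:

```csharp
using NUnit.Framework;

// Hypothetical rule: free shipping at $50 and above, flat $4.99 below.
public static class Shipping
{
    public static decimal Cost(decimal subtotal) =>
        subtotal >= 50m ? 0m : 4.99m;
}

[TestFixture]
public class ShippingTests
{
    // One test per branch, plus the boundary value of exactly 50,
    // gives full branch and statement coverage of Cost().
    [Test]
    public void Cost_BelowThreshold_ChargesFlatRate() =>
        Assert.AreEqual(4.99m, Shipping.Cost(49.99m));

    [Test]
    public void Cost_AtThreshold_IsFree() =>
        Assert.AreEqual(0m, Shipping.Cost(50m));

    [Test]
    public void Cost_AboveThreshold_IsFree() =>
        Assert.AreEqual(0m, Shipping.Cost(75m));
}
```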

Black-box testing examines the software’s functionality without knowing any of its internal workings. Testers only look at what the software is supposed to do, not how it does it internally. There are many different methods available, including: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing. Thank you Wikipedia. A tester need not possess any programming knowledge because they are only looking at the external functionality. For the same reason, though, it may be possible to skip over testing some parts of the program.
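
To pick one method from that list, here is a hedged sketch of equivalence partitioning combined with boundary value analysis, written against a hypothetical rule that applicants must be 18 to 65 inclusive:

```csharp
using NUnit.Framework;

// Hypothetical validation rule, purely for illustration.
public static class Registration
{
    public static bool IsEligibleAge(int age) => age >= 18 && age <= 65;
}

[TestFixture]
public class RegistrationTests
{
    // One representative value per partition (too young, valid, too old),
    // plus the boundary values 18 and 65 - no knowledge of the
    // implementation required, just the stated rule.
    [TestCase(17, false)]
    [TestCase(18, true)]
    [TestCase(40, true)]
    [TestCase(65, true)]
    [TestCase(66, false)]
    public void IsEligibleAge_ChecksPartitionsAndBoundaries(int age, bool expected)
    {
        Assert.AreEqual(expected, Registration.IsEligibleAge(age));
    }
}
```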

Specification-based testing, as mentioned above, tests the software against its requirements. One simple example would be testing the addition function on a calculator. By entering 2 + 2 (and knowing that the expected answer is 4), you could verify that the program functions as specified.
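
In test form, that calculator example might look like the sketch below; the Calculator class is hypothetical, and the key point is that the expected value (4) comes from the specification, not from reading the code:

```csharp
using NUnit.Framework;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorSpecTests
{
    [Test]
    public void Add_TwoPlusTwo_ReturnsFour()
    {
        // Expected result comes straight from the requirement.
        Assert.AreEqual(4, Calculator.Add(2, 2));
    }
}
```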

Visual testing involves video recording or observing the testing process in order to witness the point where the software behaves unexpectedly. This allows developers to see exactly what the tester did (what button was pushed, which browser, the timing of clicks, etc.) to trigger the failure, and it eliminates or lowers the need for developers to replicate the failure, saving time and allowing them to focus on a solution. I experienced this myself last week when I was unable to replicate a bug that had been reported. After a conference call and a ‘GoToMeeting’ session (to share the computer screen), I was able to watch our testing staff’s timing and see the error that I had been unable to replicate. Her timing was described as ‘regular’ on the bug tracking record (clicking from field to field), whereas I would describe her timing as ‘extremely slow’ based on watching the screen as she clicked through fields. Seeing that made all the difference in replicating the error, and ultimately in finding the cause and developing a fix.

Gray-box testing (or Grey, depending on which side of the pond you are on) is something I work with pretty often right now, although I never knew it had an official name. It involves having some knowledge of the background functions and structure, particularly the supporting database. A tester could perform a test on the software at the user interface level and execute SQL queries before and after the test to verify that the appropriate change took place at the database level. Having that background knowledge may help the tester design more accurate tests.
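
Here is a hedged sketch of that database check in C#; the connection string, the Customers table, and the stubbed-out UI step are all hypothetical:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class GreyBoxTests
{
    // Hypothetical connection string and schema, for illustration only.
    private const string ConnStr =
        "Server=localhost;Database=AppDb;Integrated Security=true;";

    [Test]
    public void SavingProfile_UpdatesCustomerRow()
    {
        // 1. Drive the application through the UI (stubbed out here),
        //    e.g. change customer 42's email on the profile screen.
        // SaveProfileThroughUi(customerId: 42, newEmail: "a@example.com");

        // 2. Query the database directly to verify the change landed.
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Email FROM Customers WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 42);
            conn.Open();
            Assert.AreEqual("a@example.com", (string)cmd.ExecuteScalar());
        }
    }
}
```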

My sources:
Wikipedia provides a large amount of information on software testing. And, did you know there is even an Association for Software Testing? Microsoft provides some great information too. There is also a great website dedicated to software testing – Software Testing Fundamentals.

Testing, testing…

Currently, I am involved with a lot of testing of the software product at my work. This includes trying to duplicate errors to find the source of the problem, documenting bugs and bug fixes, and adding enhancements. Enhancements don’t necessarily ‘fix’ something that is broken, but they make the user experience (UX) better. I have read about testing methodologies before, but never really developed a standard or a specific process, at least not one that I could define. My approach to testing usually includes these questions: What is it supposed to do? Does it do that? Yes? OK, good. No? OK, keep testing.

As I’ve been working on a lot of testing, I felt the need to develop some processes around it, some standard questions that I would ask at certain points in the testing process, and even some key pointers. If something (a button, for instance) doesn’t produce the desired results, what is connected to that button? What is supposed to happen? What actually happens?

So, I did a quick Google search and found a wealth of information on software testing methods. I think I’ve been using some of these methods, but just haven’t had the official terms to go along with my process.

Wikipedia provides a large amount of information on software testing. And, did you know there is even an Association for Software Testing? Microsoft provides some great information too. There is also a great website dedicated to software testing – Software Testing Fundamentals. Their About page explains their mission and purpose:
“Software Testing Fundamentals is a platform to gain (or refresh) basic knowledge in the field of Software Testing. The site consists of several articles that have been collected from various resources and experiences. If we are to ‘cliche’ it, the site is of the testers, by the testers, and for the testers. Our goal is to build a resourceful repository of Quality Content on Quality.”
They also have a page devoted to jokes about software testing! I like these guys!

The Definition:
I think that software testing is the investigation into the quality of the software product. The result of this investigation can be used to determine whether the product meets the original goals under which it was designed, responds as expected to both correct and incorrect inputs, and produces the desired results in an appropriate amount of time. Testing involves executing a program and evaluating the results. There are testing strategies for both Agile development and traditional phased development.

Oftentimes, testing is focused around finding bugs or defects. Fixing these bugs or defects ultimately improves the quality, and therefore the value, of the software product. One statement that I particularly liked on Wikipedia is: “Testing cannot establish that a product functions properly under all conditions but can only establish that it does not function properly under specific conditions.”[5] I also think it is important to look at your target audience. Who is going to be using your software? Is it a video game or an accounting product? Knowing your audience, and programming to that end, can help make their experience better, resulting in a positive review of the software product.

Besides just executing the software to observe and detect bugs, it also seems helpful to examine the code. Does the error occur as the button is pressed or after the button has been pressed? What functions are connected to the button? Are there validation scripts that run when the button is pressed or when the text field has been changed? Those subtle differences could be key in identifying the exact code that causes the bug.

Some bugs or defects occur because of coding errors, but oftentimes bugs develop because requirements for the product change during development. This causes a gap – what has been coded up to this point was based on a particular set of requirements. If the requirements change, the way the product is coded changes. Sometimes bugs are not noticed until the environment is changed. I noticed this when testing software for a product used in the U.S. and in Europe. The date format in the U.S. is mm-dd-yyyy, while the format in Europe is dd-mm-yyyy. In order for the user to experience the software seamlessly whether in Europe or the U.S., the day, month, and year had to be coded in separate fields. If the software is used in the U.S., a function combines those individual fields in the order of month, day, and year. If the software is used in Europe, another function combines those individual fields in the order of day, month, and year. During initial testing, only the U.S. date format was tested (prior to my arrival here), and it seemed to work just fine. However, when switching to European date format, the functions ended up causing infinite Ajax callbacks and some weird cursor activity between the date fields. The software was not originally designed to handle the different date formats and functions were added to try to accommodate the new requirement.
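
These are not the actual functions from that product, but here is a rough sketch of the general idea using .NET’s built-in culture support, which formats the same three fields differently depending on locale:

```csharp
using System;
using System.Globalization;

public static class DateFields
{
    // Hypothetical helper: the form stores day, month, and year in
    // separate fields; this combines them for display per locale.
    public static string Format(int day, int month, int year, CultureInfo culture)
    {
        var date = new DateTime(year, month, day);
        // "d" is the culture's short date pattern:
        // en-US -> M/d/yyyy, en-GB -> dd/MM/yyyy.
        return date.ToString("d", culture);
    }
}

// Usage:
// DateFields.Format(3, 4, 2015, new CultureInfo("en-US"))  // "4/3/2015"
// DateFields.Format(3, 4, 2015, new CultureInfo("en-GB"))  // "03/04/2015"
```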

The Cost of Bugs:
There was some mention of the cost of fixing software bugs on Wikipedia, but it was a bit sketchy and not well documented. I believe there are large dollar costs involved with releasing a finished product and then having to identify, test, and fix bugs and release a patch. There are also psychological or emotional costs involved – there may be negative feedback or a negative view of your organization when users (collectively) decide that they can’t trust your product. I’m thinking about the Healthcare.gov website when it was first released. I think the news had a field day with reporting on how the government website was hacked, leading to a feeling of public distrust about the program overall, not just the website. It seems like it would be best to design and architect your software well from the beginning to avoid or reduce the number of bugs that occur. Before widespread release of your software, adequate testing would be invaluable. Knowing that you probably can’t test for every single possible error that could occur (bugs do happen), there should be some process for reporting and fixing bugs and then communicating about upgrades to your users.

Key Things I’ve Learned:

  • Observation!
  • Only change ONE thing at a time!
  • Ask Why!
  • Check the details!
  • DOCUMENT what you tried AND what the results were!