Monday, May 19, 2014

Validation, Verification and Testing

Validation, verification, and testing are different terms with different meanings.

Testing a desktop/web application versus testing software/firmware on a medical device requires different methods and procedures. In a broader sense both fall under the Software Testing Life Cycle (STLC), but the terms and terminology may vary. For medical device testing, I usually follow a verification, testing, and then validation procedure. There is no hard and fast rule, but this order seems reasonable depending on the SDLC, the environment, and the product I'm working on.

Below I'll try to describe each of these terms.


Verification is the process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. It is concerned with whether the software is error-free, well designed, documented, and so on. It often does not require executing the code; rather, it involves human review of documents and files. It is not about finding issues with the software. It is popularly summarized as "Are we building the system right?"

During software development, we verify whether the requirement analysis was done correctly, whether the design we are going to implement serves the purpose of the software, and so on. Verification includes code inspection, consistency checking, and static analysis. Verification does include testing, but it also includes reviews and other activities that have nothing to do with testing.
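As a small illustration of verification without execution, here is a sketch (the check itself is arbitrary, chosen only for illustration) that statically inspects Python source with the standard `ast` module, flagging functions that lack docstrings. The code under review is parsed and analyzed, never run:

```python
import ast

# Hypothetical source under review; the function names are made up.
SOURCE = '''
def deposit(account, amount):
    account["balance"] += amount
    return account

def withdraw(account, amount):
    """Withdraw `amount` from the account."""
    account["balance"] -= amount
    return account
'''

def functions_missing_docstrings(source):
    """Static check: parse the source and report undocumented functions."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

print(functions_missing_docstrings(SOURCE))  # -> ['deposit']
```

A real verification activity would of course use richer rules (style, consistency, traceability), but the principle is the same: the artifact is examined, not executed.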

Testing, according to David Vogel, "is one of several verification activities, intended to confirm that software development output meets its input requirement." Testing does not cover design issues, requirement issues, documentation issues, and the like. It is more functionally oriented: testing is mostly done on specific methods, features, or use cases. During testing, we do not look into the other issues that we examine in the verification stage.

Validation is the process of checking whether the software satisfies its specified requirements. It can be done during or at the end of the software development process, and it involves executing the code. It can be summarized as "Are we building the right system?" Assuming that the software specs were correct, validation ensures the product actually meets the users' requirements.

Validation is like a big umbrella: underneath it includes planning, evaluation, risk analysis, and testing. So it is involved throughout the whole software life cycle. It's the complete coverage. [1]

Validation's objective is to discover defects in a system and assess whether or not the system is useful and usable in an operational situation. Validation should establish confidence that the software is fit for its purpose and does what the user really requires. [3]

Relationship- Validation, verification and testing
I could not resist quoting from the book, which explains these things so well. As David A. Vogel described with a Venn diagram:

"Validation is represented by outer ring of the diagram and wholly includes the verification and testing activities and yet there are validation activities that are neither verification nor testing activities. Verification activities are those which verify that individual design inputs have been properly addressed at each phase of the life cycle. Verification includes some testing. Testing verified that individual requirements have been implemented correctly." [2]

Figure: Venn Diagram for V&V and testing relationship
He also gave a visual representation of software validation coverage:

Figure: software validation coverage (Photo credit- David  A Vogel)
V&V are different, but we should use them as combined tools throughout the SDLC. According to Dolores R. Wallace and Roger U. Fujii, "Software V&V is a systems engineering discipline which evaluates the software in a systems context, relative to all system elements of hardware, users, and other software." [1]


Sources:
If anyone is interested in more details, follow my two sources. Vogel's book in particular gives more explanation, related topics, and references.

[1] The difference between Verification and Validation
[2] Medical Device Software Verification, Validation, and Compliance, by David A Vogel 
[3] I. Sommerville, Software Engineering, 8th ed., Addison-Wesley, 2007.

Friday, May 16, 2014

Unannounced changes in code impact testing

One of the common troublesome things I have found in my software testing career is handling changes and their impacts. Incremental testing is common practice nowadays. When we get a build from the developers, we test it, find issues, and report them back. The developer fixes the issues. During this procedure, we expect that the developer will not break anything that was working in the previous build. Yeah, right! We can only EXPECT (no offense to developers). For a human being it's very natural to break something while fixing something else, and that's why we testers are the fallback, checking all possibilities. Being human, we inherit the same nature: we may also slip and miss an issue. So what can be done?

A simple answer would be: engage automated testing. Okay, I would accept that. Now, can anyone tell me which areas or which test cases I would run? I would take any answer other than "run all test cases." The reason is simple: that may not be reasonable, suitable, or applicable.

Let's try to understand it with a hypothetical example: a simple deposit transaction. In a deposit transaction, we deposit money into a particular bank account. For that we do some checking on the account: account number, name, other info, any restrictions, and so on. After that, the transaction gets posted to the account. So we tested the possible scenarios of a deposit transaction. Most things worked fine, except that it was not allowing us to edit the date of birth. So in the next build we expected the fix to come in that area. Straight and simple: we tested that particular area and found it okay. Later, after we released the build, we found that it was taking an awfully long time to perform a deposit transaction. So what happened here? The tester found the issue, the developer fixed it, and the tester verified that particular fix. The tester does not know anything about the code; they do not have access to it and may not even understand it. In the end, the client suffers from a bad build.

Hmmm. Time to investigate. Backtracking the issue: the tester tested that particular issue, and the developer fixed it. But while the developer was in that class/method fixing the issue, he accidentally commented out a line of code that was necessary for performance, or removed some code he felt was unnecessary (a hypothetical scenario). The tester was not aware of this. Had we known, we would have run the performance test cases as well.

Now let's say we have automated test cases ready for regression testing. If we were to run them on such an incremental build, would I run the whole suite, or just the deposit-related test cases? Say we run most of the known related test cases. If we do that, the test run would report "Pass" but miss the point: the transaction is taking longer.

The point is, we need to establish a process of clear communication between the testing team and the development team about the changed/touched areas. My personal suggestion is to make a matrix that helps us understand the change impacts.

I would like to propose a matrix with a few queries:

1. Functionality - What functionality could be impacted by the change?
2. Performance - Is there any chance the change affects the performance of existing operations?
3. Code cleanup - Does this build contain any cleanup by the developer?
4. Optimization - Was this related to a specific fix, or just generic code optimization?
5. Bug fix - Was this build specific to an isolated bug fix?
6. Enhancement - Was any enhancement done in this build?
7. Common utility function - Was there any change in a utility function that is used as a common function?

When a developer makes a change, s/he should mark the parameters applicable to that change. Adding some comments would give testers an idea of how to narrow down the areas to test. This is additional information for the tester; based on the matrix, the tester can then decide which test cases to run.
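As a sketch of how such a matrix could drive test selection (the category names mirror the list above, but the suite names and the mapping are illustrative, not a standard), each build's marked parameters can be mapped to the regression suites a tester should run:

```python
# Map each change-impact category to the regression suites it should trigger.
# The categories mirror the proposed matrix; the suite names are hypothetical.
IMPACT_TO_SUITES = {
    "functionality":  {"functional"},
    "performance":    {"performance"},
    "code_cleanup":   {"functional", "smoke"},
    "optimization":   {"performance", "functional"},
    "bug_fix":        {"functional"},
    "enhancement":    {"functional", "smoke"},
    "common_utility": {"functional", "performance", "smoke"},
}

def suites_to_run(marked_categories):
    """Union of suites for every category the developer marked on the build."""
    suites = set()
    for category in marked_categories:
        suites |= IMPACT_TO_SUITES.get(category, set())
    return sorted(suites)

# A build marked as a bug fix that also touched a shared utility function:
print(suites_to_run(["bug_fix", "common_utility"]))
# -> ['functional', 'performance', 'smoke']
```

The mapping itself would be negotiated per project; what matters is that the developer's markings translate mechanically into a test scope, instead of the tester guessing.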

I was doing this verbally, as my setup/environment allowed it. Every time I get a new build, after performing all my regular testing, I consult with the developer and ask him about the changes he made. I do it nicely, so that he shares the information without getting offended. Then I check the code in version control and try to match it against his statement. With my limited knowledge I try to understand the changes in the code and then decide whether I should run anything more. I have been doing this for a long time, and it has saved me several times.

This is not a new concept in the software development/testing world; I'm just sharing experiences I have had for a long time. Some vendors (for example, Parasoft, Microsoft Test Manager, Vector Software) have already built software that can keep track of these kinds of issues, though I'm not sure whether they answer the parameters I suggested for the matrix. As long as we maintain a system/structure, that is the main goal.

It's always better to work as a team: development team and testing team. The goal is to avoid miscommunication and perform the task effectively.

Tuesday, May 13, 2014

Story - Touchscreen display testing - One interesting finding

Testing software on a PC is what most people do. I have had the opportunity to test programs at the device level, and the experience with device-level firmware/software is different from desktop/web applications. Sometimes it's more exciting. Today I would like to share an experience with one device that is not directly related to software/firmware, but has implications for it.

We were working on a treadmill product with the functionality of running the mill at speed and inclining/declining the mill position (it has a few more advanced features, but these are the ones relevant to this story). The software runs on a touchscreen display with the Windows 7 Embedded operating system. All the buttons are on the screen, and it responds to the user's touch. We were in the process of choosing a touchscreen display that suits our treadmill, so different people were looking at and testing it from different angles. The industrial designer emphasized the look and feel and whether it meets industry standards. The mechanical designer checked whether the display would mount on the treadmill properly, and whether the mount would hold the display when people are jumping and running fast on it. I'm not an expert in those fields, so I'm skipping the details of their process.

Now for my part. The first thing I had to check was whether it met the optimum configuration with the hardware. Once the hardware met that requirement, I checked the basic functionality. Then the important thing: performance/response. Think about it - it's treadmill firmware/software. A touch of the speed button will increase/decrease the speed, a touch of the stop button will stop the mill, and so on. In extreme circumstances, when you hit the STOP button, you expect the mill to stop; similarly, when you touch the decrease button, you expect it to reduce the speed. These functionalities sound very basic, but if the product is used by rehab patients or elderly people, they can become a BIG issue. It's like a car: no matter the situation, you expect the brakes to stop the car. If the touch response is too slow or too sensitive, we will have problems as well.

Touchscreen displays usually come in two kinds: resistive and capacitive.

"Resistive touchscreen comprises of several layers, out of which the flexible plastic and glass layers are two important electrically resistive layers. The front surface of resistive touchscreen panel is a scratch-resistant plastic with coating of a conductive material (mostly Indium Tin Oxide, ITO), printed underside. The second important layer is either made of glass or hard plastic and is also coated with ITO. Both the layers face each other and are separated with a thin gap in between. An electrical resistance is created between both the layers in such a way that charge runs from top to bottom in one layer and side-to-side in another." [1]

"Capacitive touchscreen also consists of two spaced layers of glass, which are coated with conductor such as Indium Tin Oxide (ITO). Human body is an electrical charge conductor. When a finger touches the glass of the capacitive surface, it changes the local electrostatic field. The system continuously monitors the movement of each tiny capacitor to find out the exact area where the finger had touched the screen." [1]

Both have advantages and disadvantages. Capacitive is highly touch-sensitive, doesn't need a stylus, and supports multi-touch. Resistive offers high resistance to dust and water, at low cost. Which type to choose? It depends on where and how it's going to be used, what the purposes are, and so on.

In our case, we were going to use it on a rehab treadmill. As a test basis we used a capacitive touchscreen, which has many more advantages and felt good in terms of look and feel. But it failed in one interesting area: sweat! (Basic science: salt water is a good conductive medium.)

We put some salt water on the display where the buttons are laid out and saw that it generated touches on the buttons. In the worst case, when we put salt water on the speed button, the treadmill ramped up to its highest speed within a few seconds. Imagine what that would do to the person on the mill, especially a patient doing rehab. So there ended the story of the beautiful capacitive display for our treadmill (until the issue gets fixed). Despite the other limitations of resistive touchscreen displays, we went back to a resistive one.

My intention was not to discuss how to test touchscreen displays in general, but rather how to test products against practical scenarios that can actually happen to end users, based on the nature of the product. Since this product will be used by patients, patient safety comes before any other type of testing. Testing is very important. Testing from practical scenarios is very important. It may save lives.


