Saturday, September 28, 2013

Software Testing @ what stage of SDLC


A Software Development Life Cycle (SDLC) model represents all the activities and work products necessary to develop a software system. Life cycle models make software development activities and their dependencies visible and manageable.


The conventional waterfall SDLC has these phases:
1. Requirement & analysis
2. Design
3. Development (Coding)
4. Integration/Testing
5. Maintenance


From the above, we can see that testing starts only after some functionality has been developed. The testing team is not involved in either of the first two steps. At least, that is how I have seen it in my career. It could be different in other companies, but I would guesstimate it is more or less the same in small to medium-sized companies.


In waterfall (or modified waterfall/iteration) methods, the design team designs the software based on the specs. Specs are not always written to cover every scenario; in particular, the odd-ball scenarios are usually missing. So when we start testing those odd-ball scenarios, they fall into a gray area. According to the developer, the behavior is not in the specs; to a tester (who represents the real-world user), it is a problem. The issue gets stuck in that gray area, and sometimes it takes people one level up to resolve this (unnecessary) dispute.


We can minimize this type of situation by involving testing team members early, so they can give input to the requirement/design team from the user's perspective. Many things can be discussed at the beginning and prevented from occurring in the first place.


I would like to share a few of my experiences here. (I believe the testing team is the middleman between developer and user, so in the SDLC, testing represents the user.)


#1. I worked on a core banking project where a portal interface was being developed. The design team designed a web interface that would take some input and submit it to the database. I was not part of the initial design team; I was invited to their final design review. The first thing that struck me was: what if I create an automated process and dump garbage data into the form? Clearly that was not handled in the design. I am talking about a challenge key. Even though it is a very common protection nowadays, it was not 10-15 years ago. The design team then realized the issue and added a mechanism to protect against it.
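A minimal sketch of the idea behind a challenge key (all names here are hypothetical; the actual portal used its own mechanism): the server issues a one-time token with every form, so a script that blindly POSTs garbage data, or replays an old submission, gets rejected.

```python
import secrets

# Tokens the server has issued but not yet seen back.
issued_tokens = set()

def render_form():
    token = secrets.token_hex(16)      # one-time challenge key
    issued_tokens.add(token)
    return token                       # embedded in the form as a hidden field

def handle_submission(token, data):
    if token not in issued_tokens:
        return "rejected: missing or reused challenge key"
    issued_tokens.discard(token)       # single use: a replay is rejected too
    return f"accepted: {data}"

tok = render_form()
print(handle_submission(tok, "row 1"))          # accepted
print(handle_submission(tok, "row 2"))          # rejected: token already spent
print(handle_submission("forged", "garbage"))   # rejected: never issued
```

Modern sites layer CAPTCHAs and rate limits on top, but the core design change is the same: the server must be able to tell a rendered form apart from an automated flood.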


#2. Another simple example. We had a product already in production, and it was decided to add a new feature to the existing system. The user, the BA, and the developer were there; the testing team was not. The developers implemented the feature based on verbal interactions with the user. (Agile allows you to skip comprehensive specs; it emphasizes human interaction over paperwork. Hmm, sounds like a haven for people who hate to document.) The build was then given to the testing team, and the first thing the testing team found was that the change broke the existing system. It was quickly reported and fixed by the developers. I suppose this can be called an iteration, but I kept wondering: was this iteration necessary? The first thing the testing team checked was whether any existing functionality was broken. That risk could have been raised during the brainstorming session, but it was skipped (blame Agile!). It definitely cost some time and money.
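What the testing team did here is, in essence, regression testing: a suite that pins down the existing behavior and is rerun after every change. A toy sketch (the functions are invented for illustration, not from the actual product):

```python
import unittest

# Hypothetical "existing system": a tiny discount calculator already in production.
def price_with_discount(amount, rate=0.10):
    return round(amount * (1 - rate), 2)

# Hypothetical new feature added later: tiered discounts. A careless change
# here, or to the function above, could break behavior existing callers rely on.
def price_with_tiered_discount(amount):
    rate = 0.20 if amount >= 1000 else 0.10
    return round(amount * (1 - rate), 2)

class RegressionSuite(unittest.TestCase):
    # Pins down the *existing* behavior; rerun after every new feature so a
    # break is caught by the team, not by the users.
    def test_existing_default_discount_unchanged(self):
        self.assertEqual(price_with_discount(100), 90.0)

    def test_new_tiered_feature(self):
        self.assertEqual(price_with_tiered_discount(1000), 800.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures))  # -> failures: 0
```

Had such a suite been agreed on in the brainstorming session, the broken-existing-system iteration would likely have been caught before the build reached the testers.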


I could continue with a few more examples like this, but I think the point is clear: we have to have a test plan for every phase of the SDLC. This is what I felt at a very early stage of my career as a tester, and apparently I was not the only one. The software testing community has introduced modifications and improvements to the traditional system.

The V-Model is such an improvement over the existing waterfall method, and it can also be applied, to some extent, to iterative or Agile methods. This model is based on associating a testing phase with each corresponding development stage: for every single phase/iteration in the development cycle, there is a directly associated testing phase. I have chosen this model because it visually represents, and makes easy to understand, the concept of engaging testing in each phase, not necessarily only in the sequential phases of waterfall.

V- Model design:

(Photo credit: NaveenKumar Namachivayam / Google search)
  
Following is a short description of the validation phases in the V-Model. (I liked the description on the Tutorials Point site, so I will quote the next few paragraphs from http://www.tutorialspoint.com/sdlc/sdlc_v_model.htm.)
Acceptance Testing: Acceptance testing is associated with the business requirement analysis phase and involves testing the product in the user environment. Acceptance tests uncover compatibility issues with the other systems available in the user environment. They also discover non-functional issues, such as load and performance defects, in the actual user environment.
System Testing: System testing is directly associated with the system design phase. System tests check the entire system functionality and the communication of the system under development with external systems. Most software and hardware compatibility issues can be uncovered during system test execution.
Integration Testing: Integration testing is associated with the architectural design phase. Integration tests are performed to test the coexistence and communication of the internal modules within the system.
Unit Testing: Unit tests designed in the module design phase are executed on the code during this validation phase. Unit testing is testing at the code level and helps eliminate bugs at an early stage, though not all defects can be uncovered by unit testing.
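As a concrete illustration of the unit-test level, here is a tiny, hypothetical function tested in isolation; unit tests are cheapest exactly at the edges where off-by-one bugs hide:

```python
# Hypothetical example: paginating a list of records (not from any project above).
def page(items, page_no, page_size):
    start = (page_no - 1) * page_size
    return items[start:start + page_size]

# Unit tests exercise one function at code level, pinning down the boundaries.
assert page([1, 2, 3, 4, 5], 1, 2) == [1, 2]   # first page
assert page([1, 2, 3, 4, 5], 3, 2) == [5]      # last, partial page
assert page([1, 2, 3, 4, 5], 4, 2) == []       # past the end: empty, not an error
print("unit tests passed")
```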

OK, let's say those are the steps in the waterfall method, which has many limitations. Keeping these phases in mind, let's take a look at what the Agile method says in this regard. From Microsoft's Testing Methodologies topic (http://msdn.microsoft.com/en-us/library/ff649520.aspx):
"In Agile, People and interactions are emphasized, rather than processes and tools. Customers, developers, and testers constantly interact with each other. This interaction ensures that the tester is aware of the requirements for the features being developed during a particular iteration and can easily identify any discrepancy between the system and the requirements."
Agile is basically in favor of engaging testing in all phases (if I interpret it correctly). One advantage of the Agile method is that the customer is part of the team, so the gap between the developer and the customer is reduced. But the customer cannot always be there and does not necessarily understand every technical aspect of the software, so the testing team can still be a bridge between developer and customer. The testing team also gets the advantage of knowing the user: the user's requirements, expectations, and, importantly, the user's psychology.

Another SDLC method is the iterative method. According to Microsoft, "the development model breaks the project into small parts. Each part is subjected to multiple iterations of the waterfall model. At the end of each iteration, a new module is completed or an existing one is improved on, the module is integrated into the structure, and the structure is then tested as a whole" (http://msdn.microsoft.com/en-us/library/ff649520.aspx). In other words, it supports the model above, but in smaller iterations.


Figure: Quality Assurance in the Iterative Model (credit: snyders.us)

Unified Process
Another popular methodology is the Unified Process (UP), as used by the Rational Unified Process (RUP), which is performed in an iterative and incremental manner. The life cycle of the UP is presented in the following figure.



The UP divides the project into four phases, which are shown in the figure above and discussed below [3]:

• Inception – By the end of this phase, a business case should have been made, the feasibility of the project assessed, and the scope of the design set.

• Elaboration – In this phase, a basic architecture should have been produced and a plan of construction agreed upon. Furthermore, a risk analysis takes place, and those risks considered major should have been addressed.

• Construction – This phase produces a beta-release system. A working system should be available and sufficient for preliminary testing under realistic conditions.

• Transition – The system is introduced to the stakeholders and intended users. This phase is complete when the project team and the stakeholders agree that the objectives set in the inception phase have been met and the user is satisfied.


Some of the key features of the UP are as follows [2]:

• It uses a component-based architecture, which creates a system that is easily extensible, promotes software reuse, and is intuitively understandable. Components are commonly used to coordinate object-oriented programming projects.

• It uses visual modeling software such as UML, which represents code in diagrammatic notation so that less technically competent individuals, who may have a better understanding of the problem, can have greater input.

• It manages requirements using use cases and scenarios, which have been found to be very effective both at capturing functional requirements and at keeping sight of the anticipated behaviors of the system.

• Design is iterative and incremental – this helps reduce the project's risk profile, allows greater customer feedback, and helps developers stay focused.

• Verifying software quality is very important in a software project. The UP assists in planning quality control and assessment, built into the entire process and involving all members of the team.


Waterfall, iterative, Agile, or RUP: I personally don't think any one model is inherently good or bad. It depends on the project environment, company structure, budget, nature of the business, and so on. In short, whichever model is most suitable for the company will be the good one.


Management may argue that engaging testing team members at the requirement and/or design level adds resources, time, and cost, but early detection will certainly reduce the overall cost of the testing life cycle.

References:
[1] Microsoft, "Testing Methodologies", http://msdn.microsoft.com/en-us/library/ff649520.aspx

[2] B. Grady, C. Robert, J. Newkirk, Object Oriented Analysis and Design with
Applications, 2nd edition, Addison Wesley Longman, 1998

[3] P. Kruchten, “What is Rational Unified Process?”, The Rational Edge,
http://www.therationaledge.com/content/jan_01/f_rup_pk.html Accessed 2/2/2005

Tuesday, September 24, 2013

Test Plan and User (Human) Psychology & behavior

Test Plan: As defined in IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, a test plan is a management planning document describing the scope, approach, resources, and schedule of intended testing activities. The test plan identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

The above is the textbook definition of a test plan. As the definition says, it describes the scope and approach of the testing to be performed on the target product. But it does not mention one important thing that I learned from experience, something people may not emphasize: the test strategy.

So... what is a test strategy, then? Another textbook description: the test strategy is a defined set of methods and objectives that direct test design and execution. It describes the overall testing approach for the application under test, including stages of testing, completion criteria, and general testing techniques. The test strategy forms the basis for test plans.

Strategy plays the key role in planning the "attack on the software". It defines the testing approach, and that approach should be, in my opinion, "attacking the software" from every possible angle of a user. It is not only about test methodologies; it is about going beyond normal scenarios, to scenarios that a user may create accidentally or intentionally.

Human psychology is an integral part of testing. A tester/QA person should approach the software thinking of himself as a user: a non-technical, mid-technical, or highly technical user, and sometimes even like a crazy person and/or a baby. Let me mention a few examples that relate human psychology and different behaviors from the user's point of view:

#1. Let's say I was given an old-style cassette player to test (if people still remember what a cassette player is...). My general approach would be to test all the functions according to the conventional test plan (derived from the specs), so I would have test cases for each function: Play, Rewind, Forward, Stop, Record, and so on. The product is designed with buttons so the user can easily operate it, whether soft-touch or physical. The general expectation (which actually becomes a limitation) in testing this product would be whether all the functions work properly and as expected. Usually, we would design our testing assuming the user operates it by hand. Would any tester think about a user who may actually want to operate it with a toe? Perhaps 1% of people, or even fewer. To me, it is not about the percentage; it is about whether I thought about it and designed my test cases for users like this or not.

#2. Another example from real life: with the recent iOS 7 release, one of the bugs users found allows a call to be made from the lock screen. It takes seven consecutive taps to make it happen. Now, who would think of that magic number of tries (and why would anyone even do that in the first place)? This could be a great place to engage automated testing: this kind of scenario has to be covered up to a certain number of tries, so there is less chance of users finding it first.
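This is exactly the kind of defect a loop reaches trivially and a human tester rarely does. A hedged sketch, using a made-up lock-screen model with the defect injected on purpose (loosely modeled on the iOS 7 report, not Apple's actual code):

```python
# Hypothetical model of a lock screen whose defect appears only on the
# Nth consecutive attempt.
class LockScreen:
    def __init__(self):
        self.attempts = 0

    def press_emergency_call(self):
        self.attempts += 1
        # Injected defect for illustration: the 7th consecutive press
        # bypasses the lock.
        return "unlocked" if self.attempts == 7 else "locked"

def probe(max_tries=20):
    # An automated test simply hammers the same action and reports which
    # try, if any, exposed the defect.
    screen = LockScreen()
    for n in range(1, max_tries + 1):
        if screen.press_emergency_call() == "unlocked":
            return n
    return None

print(probe())  # -> 7
```

The cost of covering "up to N tries" in automation is near zero, which is why repeated-input scenarios belong in the automated suite rather than the manual plan.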

#3. I was working on an Automated Teller Machine (ATM) project for a bank around 2008. They were implementing the ATM interface with the Core Banking Software (CBS): the ATM communicated with the hardware and interfaced with the CBS. I was engaged at the time of the UAT sign-off. I tried a few things I thought a customer might do, and I was able to create one scenario. It was more of an implementation issue.

When a user sends a command to withdraw money, for example $100 ($20 x 5 bills), the machine does all the checking with the CBS, counts the money, and presents it through the dispenser. When the money comes out, the customer takes it; if it stays in the dispenser for 30 seconds, the system takes it back inside. When it goes back inside, the customer can claim that money later at the bank, and the use case ends there. The problem arose when I took one note from the dispenser and let the other 4 notes go back inside. Now I am still eligible to claim the money, and the bank has two records: 4 notes in the tray, and a withdrawal transaction of 5 notes. The camera at the ATM counter was not able to give a clear view. So, theoretically, I am entitled to get my $100 back (unless I fall into some fine print in the terms and conditions). This scenario created an issue, and people had to figure out how to resolve it. This is one example of how a user can do things that a typical test plan would not cover.
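The reconciliation gap can be made concrete with a simplified model (illustrative only; real ATM and CBS logic is far more involved). The flaw is that the claimable amount is derived from the transaction, not from a count of the notes that actually came back:

```python
def withdraw_and_retract(notes_dispensed, notes_taken_by_customer, note_value=20):
    # Simplified model of the dispense/retract flow. The system logs a full
    # retraction even though the customer kept one of the notes.
    notes_retracted = notes_dispensed - notes_taken_by_customer
    return {
        "withdrawal_recorded": notes_dispensed * note_value,  # CBS sees $100 out
        "retract_claimable":   notes_dispensed * note_value,  # full claim allowed
        "cash_in_retract_bin": notes_retracted * note_value,  # only $80 came back
    }

ledger = withdraw_and_retract(notes_dispensed=5, notes_taken_by_customer=1)
# The bank may have to honour a $100 claim while holding only $80 in the bin:
print(ledger["retract_claimable"] - ledger["cash_in_retract_bin"])  # -> 20
```

The fix, conceptually, is to base `retract_claimable` on a physical count of the retracted notes rather than on the original transaction amount; the test scenario above is what exposes the difference.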
  
#4. Another experience I would like to share. This time it was a simple login issue with a dial-up internet service. In the early days, around 2004, I used a dial-up internet connection from a company. One of my friends wanted to use my connection for a short period, and I gave him my credentials. Normally, when he used it, I could not use it at the same time. One day, we found we were both online! How did that happen with one account? We had logged in at exactly the same time: concurrent users. Two people using one connection for one price! We did not do it intentionally; it just happened!
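This looks like a classic check-then-act race condition. A hedged reconstruction (I do not know the provider's actual implementation; the sleep simply widens the window we hit by accident):

```python
import threading
import time

active_sessions = []

def login(account):
    # Check-then-act without any locking: both callers can pass the check
    # before either records a session.
    if account not in active_sessions:      # check: "already online?"
        time.sleep(0.1)                     # window where the race happens
        active_sessions.append(account)     # act: create the session

# Two people dial in with the same credentials at the same moment.
t1 = threading.Thread(target=login, args=("my-account",))
t2 = threading.Thread(target=login, args=("my-account",))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(active_sessions))  # 2: the same account is logged in twice
```

A test plan built only from the spec ("one session per account") would never schedule two logins in the same instant; a strategy that deliberately probes concurrency would.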

I have tried to give examples from different stages of the software, from design through implementation. From these examples, we can see that human psychology and behavior is something we need to learn about and design for during testing. Now the question is: how can we handle such human behavior in our test life cycle? I would recommend adding a section to the test strategy that covers user behavior based on the users' demographics: age, knowledge, gender, country, and so on. Interview users, get to know their expectations, and observe their points of view on the software. Even though it will vary from user to user, at least we would know that we covered this kind of user behavior in our testing.

Without test cases like these, we cannot just rely on statements like "we are 80% confident in our testing" or "testing is 80% done". I feel this kind of testing needs to be mentioned in the test plan. We can argue about the priority or weight of these kinds of bugs/issues later, but it at least gives us more test coverage and a better chance of minimizing bugs.

I believe our job is to find issues; it is the product stakeholders' job to decide whether to fix them or to release with known issues, since fixing involves resources, time, and, obviously, money!

An off-track observation: I have a two-year-old son. I gave him my old BlackBerry and Android phones. The way he operates them is a crazy combination of sequences; randomness is his specialty. I have tried to follow him, and sometimes I wonder: if I were the tester of that product, would I ever think of anything like this, in this order?! That is why I said that sometimes we should extend our thinking beyond the normal user and go crazy!
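This "toddler testing" has a formal counterpart: monkey (random) testing, where random operation sequences are fired at the system while an invariant is checked after every step. A toy sketch with a made-up player model and a deliberately injected defect:

```python
import random

class Player:
    # Hypothetical media-player model; the technique is what matters.
    def __init__(self):
        self.state = "stopped"
    def play(self):
        self.state = "playing"
    def stop(self):
        self.state = "stopped"
    def pause(self):
        # Injected defect for illustration: pausing while not playing
        # leaves the player in a bad state.
        self.state = "paused" if self.state == "playing" else "error"

def monkey_test(steps=200, seed=42):
    random.seed(seed)  # fixed seed: the random run is reproducible
    player = Player()
    for i in range(steps):
        op = random.choice([player.play, player.stop, player.pause])
        op()
        if player.state not in ("playing", "stopped", "paused"):
            return f"invariant broken at step {i} after {op.__name__}"
    return "no issue found"

print(monkey_test())
```

Seeding the generator is the key design choice: the run is random enough to hit sequences no scripted test case would list, yet reproducible enough that the failing sequence can be replayed and debugged.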

That will be all for today. I'm in the middle of testing and releasing my product, and I should get back from my testing break...

I'll write another blog post where I'll try to cover at what stage testing can start in a Software Development Life Cycle.




