Friday, December 20, 2013

QA as a Career - should I choose the Manual or the Automated Testing path?

Which is more important for a QA person – learning automated testing, or learning the fundamentals of testing and how to find issues/bugs in software/firmware?

Over the last few years, I have seen many people become more interested in taking up Software Testing as a career. There could be many reasons why they choose it, and I welcome them all for feeling interested in this field. What I have seen is that they take a short course on Software Testing that teaches them Software Testing concepts as well as automation tools, which is good. They are educating themselves before coming to the practical job field, which is the right way to do it. I have had the opportunity to meet some of the teachers and students of those courses. We talked a little about the courses, and the students were very excited. It all sounded good so far. Then I got a bit scared when I came to know why they were so excited.

When I talked with a few of them, they said they would be learning all the new automation tools and that there is a big market for automated software testers. So they concentrated more on automation testing than on the concepts of Software Testing. I would not say that is bad, but it will have consequences in the end. I asked them about Manual Testing and whether they had any plan to learn how to do it. It seemed they were more interested in automated testing than in manual testing, as if Manual Testing were a dead or outdated method.

When people want to start a career in this field, there are a few fundamental things they need to learn and study first. Once that part is done, they can advance towards automation testing. Automation testing is for those who already know how to test, and how to test software thoroughly.

Once someone has learned how to do software testing, in other words manual testing, they can level up with automated testing. What automation does is help minimize the effort of Regression Testing. It is definitely not a tool that will identify issues on its own; it does not help with finding or reporting new issues. So, before learning automation, I would prefer to learn software testing (manual) first and then move on to the automated tools.

What we do with automation is write scripts that automatically read input data from somewhere and then apply those inputs to the software UI. For example, let's say I'm asked to test a web application that calculates how much student loan I need from the bank. The application has three parts: it takes the input, processes the input, and then displays the loan amount needed from the bank. There are input fields on the application categorized under Expenses and Income. Once those Expense and Income fields are populated with data, clicking the Calculate button calculates the loan amount.

As a tester, my job is to identify all the possible scenarios that can happen on that screen. Start by planning what types of testing you want to perform – Functional testing, Regression testing, Performance testing, Load testing – and then write those scenarios down as Test Cases. Find all the valid and invalid sets of inputs for each field and write down the expected results. Once these things are done, I can say I'm ready to actually test the software.
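
As a rough illustration, here is how a few of those scenarios might be written down as data-driven test cases for this loan calculator example. This is only a sketch in Python; the field names, the specific values, and the expected outcomes are all invented for the example.

# Hypothetical test cases for the loan calculator screen.
# Each case records the inputs, whether they are valid, and the expected result.
test_cases = [
    # description,             expenses, income, valid?, expected outcome
    ("typical values",         12000,    4000,   True,   "loan amount is calculated"),
    ("income covers expenses",  5000,    9000,   True,   "loan amount of zero"),
    ("negative expense",        -100,    4000,   False,  "validation error shown"),
    ("non-numeric income",      8000,    "abc",  False,  "validation error shown"),
    ("empty fields",            None,    None,   False,  "calculation refused"),
]

for description, expenses, income, is_valid, expected in test_cases:
    kind = "valid" if is_valid else "invalid"
    print(f"[{kind}] {description}: expenses={expenses}, income={income} -> expect: {expected}")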

With Manual Testing, I enter all those values, check the combinations of valid and invalid data sets, and check whether the output matches the expected results. If there is any problem, I report the issue. Those are the basic steps of manual testing. The same thing applies to automated testing: I still have to cover all of those test scenarios. The only difference is that I write scripts and run them with an automation tool. I keep an Excel sheet containing the test data as input, and in the automation tool I write code that reads from the Excel sheet, applies the data to the web application, and stores the calculated output back in the Excel sheet. Then, by looking at the Excel sheet, I can tell whether any test data set failed.
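
To make that concrete, here is a minimal sketch of what such a script could look like in Python, assuming Selenium is used to drive the browser and openpyxl to read and write the Excel sheet. The URL, the element IDs (expenses, income, calculate, loan-amount), and the column layout of the sheet are assumptions made up for this example, not part of any real application.

# Sketch: read test data from an Excel sheet, apply it to the web UI,
# and write the calculated result back next to each row.
from openpyxl import load_workbook
from selenium import webdriver
from selenium.webdriver.common.by import By

wb = load_workbook("loan_test_data.xlsx")
ws = wb.active  # assumed layout: column A = expenses, column B = income

driver = webdriver.Chrome()
driver.get("http://example.com/loan-calculator")  # hypothetical application URL

for row in ws.iter_rows(min_row=2, max_col=2):  # skip the header row
    expenses, income = row[0].value, row[1].value

    expenses_field = driver.find_element(By.ID, "expenses")
    income_field = driver.find_element(By.ID, "income")
    expenses_field.clear()
    expenses_field.send_keys(str(expenses))
    income_field.clear()
    income_field.send_keys(str(income))
    driver.find_element(By.ID, "calculate").click()

    # Store the displayed loan amount back in column C for later comparison.
    actual = driver.find_element(By.ID, "loan-amount").text
    ws.cell(row=row[0].row, column=3, value=actual)

driver.quit()
wb.save("loan_test_results.xlsx")

Comparing the stored results against an expected-value column (and marking each row pass/fail) would be the obvious next step, but the shape of the loop stays the same.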

The advantage of using automated tools is that I can run the same tests multiple times to check the integrity of the software, and it is much faster than a human doing the same amount of work. The only side effect is that additional learning is required, depending on which automated testing tool is being used.

The point I wanted to focus on throughout this writing is that I have to develop myself to identify all the possible scenarios, all the possible tests I can and should perform on the software. Running and maintaining scripts for automation testing is not the key; the key is developing the skill of finding issues in the software, the methods to find them, and knowing which tools to apply and when. Tools come and go, but the concepts and methods stay.

Have fun testing! Cheers! 

Update: 12/24/2013
In this writing I was referring to "Manual Testing" as the basis of the "Software Testing Concept". It's the testing concept that is more important to learn and study, and that's what I was trying to point out. The manual and automated methods both have their own pros and cons and can be argued both ways. It's like learning to drive a car: would you prefer to learn on a stick shift or an automatic? A decade ago this could be a choice; now it's very rare to find a car with a stick shift. Either way, the goal is to learn the driving concepts.

Wednesday, December 18, 2013

Self-Motivation in Software Testing


Self-motivation is a key factor in quality Software Testing. Software testing is monotonous; working with the same interface/screen over and over can be boring at times. You have to find a way to avoid the scenario where you get bored with it, because that would be the end of the testing. Once bored, you will find fewer issues in the software, and the quality of the software will be compromised. There are many outside problems that can cause frustration for the person testing the software, but just being with one software product for a long period of time can, by itself, have a large effect on finding issues. So we have to find and apply different techniques (there are no hard and fast rules) to keep ourselves motivated. In the following, I'll try to share the experiences that have worked for me in many projects to keep myself motivated in software testing.

Testing Approach - For any software that I'll be testing, before looking into the software itself, I usually try to learn and study the details of the product. Even if it's written in the specs, I try to get information from the different people involved through questionnaires. This gives me a broader idea of what was supposed to be built, what was designed, and how it is supposed to behave. Then I can double-check my testing procedure (Test Plan, Test Cases) to see whether I have planned to cover most scenarios.

I perform smoke testing on the software before starting the testing according to the test cases. Smoke testing gives an idea of whether the software is ready for testing at all. If I find that the software does not run, or crashes now and then, I ask for a rebuild of the project and a newer version. This saves me the time and effort of starting the formal, lengthy testing procedure too early.
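
For a web application, even a tiny scripted check can answer the "is this build worth testing?" question. Here is a rough sketch using only the Python standard library; the URL and the page title it looks for are placeholders, not a real application.

# Minimal smoke check: does the application come up at all?
from urllib.request import urlopen
from urllib.error import URLError

APP_URL = "http://example.com/loan-calculator"  # hypothetical URL

def smoke_check(url):
    """Return True if the app responds with HTTP 200 and shows its main page."""
    try:
        with urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
            return response.status == 200 and "Loan Calculator" in body
    except URLError as error:
        print(f"Smoke check failed: {error}")
        return False

if __name__ == "__main__":
    print("Build looks testable" if smoke_check(APP_URL) else "Ask for a new build")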

Before starting the formal testing procedure, I start with light things like User Interface checks - alignment issues, correct text, correct icons in different states, a clean UI, etc. No matter how silly, non-functional, or unimportant such issues may seem, the User Interface is the first thing the user will look at. It's the first impression the user gets of the software. So it's important to have a good, clean, simple, understandable UI, and my job is to make sure of it. When I find issues, I write them down (leaving the priority of the issue blank).

The next thing I do is another easy task, field validation - valid and invalid inputs, min/max checking of each field, tab order (if applicable), field size, etc. Whatever issues I find, I write them down. Most of the time, these issues get a low or the lowest priority.
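
As a sketch of the min/max style of checking, the boundary values for a numeric field can be generated mechanically from its limits. The field names and limits below are invented for the loan calculator example.

# Classic boundary-value selection: just below, at, and just above each limit,
# plus a typical mid-range value.
def boundary_values(minimum, maximum):
    return [minimum - 1, minimum, minimum + 1,
            (minimum + maximum) // 2,
            maximum - 1, maximum, maximum + 1]

# Hypothetical field limits for the loan calculator screen.
fields = {
    "monthly_rent": (0, 10000),
    "monthly_income": (0, 50000),
}

for name, (low, high) in fields.items():
    for value in boundary_values(low, high):
        expected = "accepted" if low <= value <= high else "rejected"
        print(f"{name}: enter {value}, expect the input to be {expected}")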

At this point, if I have a good understanding of and access to the developers, I can send them the issues I have found so far. If agreed, they can fix the low-priority issues while I concentrate on finding the main/major issues. From my experience, at the initial stage after the software is released for testing, developers have some time they can use to look into low-priority/UI issues - time they will not have later, when I start finding the major issues. Finding major issues takes time. So, for optimum use of resources, I try to keep the developers busy fixing those issues (if agreed among all concerned parties). If not, then I record the issues and prioritize them in consultation with the concerned parties.

Once I have a few issues on board, I then start to follow the formal steps. This may look like an informal testing procedure, and it may go beyond my team's boundaries, but I have found it useful in terms of the optimal use of resources. As for the formal steps, they are quite common across the software industry, so I'm not going to describe them here.

One important thing I have learned from my experience in software testing, both as a team member and as a team lead, is this: at no point should one person wait for another to finish a task. For example, testing should not wait for development to finish. Development and testing should go side by side. Waiting around sometimes becomes a demotivating factor for testing team members; they should be engaged in testing throughout the whole SDLC.
Some suggestions - During this process, like any other human being, I also get bored with the software after a while. I take different measures to keep myself motivated during testing. A few things I do (they work for me, but may not work for others) and suggest are:

• My primary motivation comes from finding and reporting issues. When I start to report issues in those easy areas (mentioned before), the number itself motivates me. When I see I already have, say, 20 issues on board, it gives me a good feeling and I get motivated to increase the number by finding more. I use certain numbers as milestones - 25, 50, 100 reported issues. The more issues I find, the more the number increases, and I feel like I'm on the right track. (I usually try to report quality issues, not just issues to reach milestones.)
• I frequently take breaks from the testing procedure. Looking at the same screen time after time makes me too familiar with it; once I am too familiar, I start to feel that everything is as it should be. When I take a break and come back, I get a fresher look at the screen, and sometimes I find issues from that. Just don't lose your focus or concentration because of the break.
• When I feel like I'm running out of testing ideas, I go back to my drawing board and review everything I was supposed to do and what I have done so far. Sometimes I find an idea that I hadn't planned for before using the software. I update my documents as well when I find things like this.
• Another thing I do when I run out of ideas is intentionally plan to talk with the developers to get more inside knowledge of the software. Black box testing is, by definition, done without knowledge of the internals, but a little chitchat about how they developed a particular feature, or what conditions they used to plot a graph line, helps me think of different testing ideas. And when the area is broader, there are more chances to find issues, and the number will increase.
  For example, suppose the software is supposed to plot a line graph based on test results and dates. On any given date, a test can be performed, or it can be skipped but still recorded. If there are test results for two dates, a straight line is drawn on the graph. If there are more than two dates and a test was performed on each day, the points are connected with a curved line. If the first day has a skip, then the next two days' results form a straight line and no line is drawn for the first two points. If there are more than 10 points across multiple days, the logic is different again. In short, different logic was used for different cases - a single point, or data sets of 2, 3, 10, 11, 12 points, and so on. Before knowing this logic, I was just testing with a few test data sets. Once I knew the logic, I started to lay out different techniques to test these scenarios (a sketch of how these cases might be laid out as test data follows this list).
  In an ideal world, these things would be handled and tested at the white box level, but as an end-user tester, I don't take anyone's word for it. I don't even assume that this was tested at the unit level. Caution: do not get biased by the developer.
• Sometimes, when I feel that something is not right even though it may be according to the specs, I report it as an Enhancement or an Observation, just to avoid any objection, and I support my finding with logical arguments from a user's point of view. Enhancement is the category where I can actually contribute my own ideas for improving the product. An Observation is something I noticed that may be working as designed but is worth discussing further. I may not be the person who takes the final decision about the changes, but at least contributing ideas as Enhancements/Observations definitely feels good!
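
Referring back to the graph-plotting example above, here is a rough sketch of how those cases might be laid out as test data once the developers have explained the logic. The point counts and skip patterns follow the description above; the structure itself (and the sample values) is just one convenient, made-up way to record them.

# Each scenario: a short description and a list of daily results, where a number
# is a recorded test result and None is a day that was skipped but recorded.
graph_scenarios = [
    ("single point, no line expected",                        [5.0]),
    ("results on two dates -> straight line",                 [5.0, 6.2]),
    ("results on three dates -> curved line",                 [5.0, 6.2, 5.8]),
    ("skip on first day -> no line for the first two points", [None, 6.2, 5.8]),
    ("exactly ten points",                                     [5.0] * 10),
    ("more than ten points -> different plotting logic",       [5.0] * 12),
]

for description, daily_results in graph_scenarios:
    recorded = sum(1 for result in daily_results if result is not None)
    print(f"{description}: {len(daily_results)} days, {recorded} recorded results")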

Feel free to test anything and everything. Don't worry about the limits, or about what you were or were not supposed to do. Free your mind. Play the role of different levels of user. Find the things that you feel can motivate you for testing, and use those techniques.

Automating our testing is fine; it helps keep things from getting monotonous or boring. But it's still a human being who designs the automated test scripts. If you feel bored, it will affect how you write and update those scripts. Scripts can only do what you think of. If your thoughts are bounded by tiredness, then it won't matter whether we use automated testing or manual testing.

Lastly, to keep yourself motivated in software testing, don't just test the software that is given to you; feel ownership of the software. Have fun in testing!

    Tuesday, October 22, 2013

    Test Plan - Software/Firmware Testing Building Blocks

    We, the Test Engineers, are already familiar with the formal building blocks (terminologies) of testing – the Test Plan, Test Case documents, etc. There are already many different ideas and thoughts about Test Plans and Test Cases. Big companies like Microsoft, IBM, and Google try to provide guidelines for these documents. Even IEEE has a standard that specifies guidelines for the Test Plan document. All of them are good, no doubt; they have tried to bring some industry standards to software/firmware testing. So I'm not going to give any new idea or standard here, but rather summarize and consolidate them into one document that covers both the software and firmware areas.

    In the following section, I will be writing about a Test Plan (TP) template that can be used generically. It is based on the guidelines provided by ANSI/IEEE Standard 829-1983/829-1998, the IEEE Standard for Software Test Documentation. I have modified it in different places and re-organized it a little differently. All credit goes to those who wrote these steps, as mentioned at the end of the blog.

    What is a TEST PLAN?

    As defined in IEEE 829-1998, "Test plan is a management planning document describing the scope, approach, resources, and schedule of intended testing activities. Test plan identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning."

    GENERIC TEST PLAN OUTLINE / TEMPLATE - AT A GLANCE


    It may vary based on the nature of the product, business, and budget.

    1. Test Plan Identifier
    2. Introduction
    3. Reference Documents
    4. Objective
                 a. Business Objectives
                 b. Test Objectives
                 c. Quality Objectives
    5. Scope
    6. Test Items
    7. Features to Be Tested
    8. Features Not to Be Tested
    9. Product’s Testing Strategy / Approach
                   a. Unit Testing
                   b. Component Testing
                   c. Black Box Testing 
                   d. Integration Testing
                   e. Conversion Testing
                   f. System Testing
                   g. User Interface Testing
                   h. Security Testing
                   i. Recovery Testing
                   j. Globalization Testing
                   k. Performance Testing
                   l. Regression Testing
                   m. Load Testing
                   n. User Scenario Testing
                   o. User Acceptance Testing
                   p. Beta Testing
    10. Pass/ Fail Criteria
    11. Testing Process 
                 a. Test Deliverables
                 b. Testing Tasks
    12. Test Management
                 a. Individual roles and responsibilities
                 b. Schedules 
                 c. Staffing and Training 
                 d. Risks and Assumptions
    13. Environmental Requirements
    14. Control Procedure
    15. Approvals


    GENERIC TEST PLAN OUTLINE / TEMPLATE - DESCRIPTION

    Detailed descriptions of each section follow:

    1. Test Plan Identifier – A unique identifier, which can be based on the product or can be any number that identifies the document uniquely.

    2. Introduction – This section provides an overview/history of the project and briefly describes the items and features to be tested.

    3. Reference Documents – Provide references to related documents, such as the Project Authorization, Project Plan, QA Plan, Configuration Management Plan, etc.

    4. Objective – 
    Business Objectives 
    Specify the business objectives for the release - which aspects/features are the most important business-wise. These will be given the highest priority and the most extensive testing effort.

    Test Objectives  
    Identify the success criteria for the project that can be measured and reported. You can define the goals for the planned testing effort. For example, an objective might be to track successes, failures, defect status, and issues in order to provide feedback to development before software is delivered to customers.3

    Quality Objectives 
    List, in table format, the overall quality goals for the release, as well as the required entry and exit criteria for testing. Quality objectives are defined at the project level and implemented in individual test plans, where you can track whether each objective has been met. Typically, quality objectives provide various measurements of quality for the overall release, for example, the number or percentage of high-severity defects that are allowed or the number of failed execution records that are permitted.3

    5. Scope – Specify the scope of the Test Plan. Describe specifically what the testing should accomplish, what to test, and what not to test. For example, testing can be limited to the three major operating systems, without worrying about other OSs.

    6. Test Items – Specify the items to be tested within the scope of the test plan - the different functions of the software. Also provide references to the required documents - Requirements doc, Design doc, Architecture doc, etc.

    7. Features to Be Tested – Mention all the features and combinations of features/functions that are to be tested.

    8. Features Not to Be Tested – Mention all features and specific combinations of features that will not be tested, along with the reasons.

    9. Product's Testing Strategy / Approach – The following testing methods will vary from company to company. Usually I do the following types of testing in different phases, so that I do not take too much time trying to make the software/firmware perfect, which can create a big risk of the company losing the whole product in a competitive market.

    PHASE - I

    Unit Testing 
    Unit testing is testing directly at the most granular level. Suppose we are given a method that takes two values and returns a result. Does the method fail (crash, throw an exception, etc.) if either of the values is null or invalid? Does it return valid results given a specific set of values? Does it fail if given an incorrect set of values?
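
    As an illustration only, here is what those questions might look like as unit tests in Python, written against an imaginary calculate_loan(expenses, income) function. The function, its rules, and the expected numbers are assumptions made for the example, not part of any real product.

# Unit tests for an imaginary two-argument function, asking exactly the
# questions above: valid inputs, null inputs, and invalid inputs.
import pytest

def calculate_loan(expenses, income):
    """Toy implementation, included only so the tests below can run."""
    if expenses is None or income is None:
        raise ValueError("expenses and income are required")
    if expenses < 0 or income < 0:
        raise ValueError("expenses and income must be non-negative")
    return max(expenses - income, 0)

def test_returns_expected_result_for_valid_values():
    assert calculate_loan(12000, 4000) == 8000

def test_never_returns_a_negative_loan_amount():
    assert calculate_loan(3000, 9000) == 0

@pytest.mark.parametrize("expenses, income", [(None, 4000), (12000, None), (-1, 4000)])
def test_fails_for_null_or_invalid_values(expenses, income):
    with pytest.raises(ValueError):
        calculate_loan(expenses, income)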

    Component Testing 
    Similar to unit testing, but with a higher level of integration. The big difference here is that the testing is done in the context of the application instead of directly testing the method in question. The purpose of component testing is to ensure that the program logic is complete and correct and that the component works as designed.2

    Black Box Testing 
    Black box testing assumes the code to be a black box that responds to input stimuli. The testing focuses on the output to various types of stimuli in the targeted deployment environments. It focuses on validation tests, boundary conditions, destructive testing, reproducible tests, performance tests, globalization, and security-related testing.

    Integration Testing 
    Testing conducted in which software elements, hardware elements, or both are combined and tested until the entire system has been integrated. The purpose of integration testing is to ensure that design objectives are met and that the software, as a complete entity, complies with operational requirements. Integration testing is also called System Testing.4

    Conversion Testing 
    Testing performed to make sure that, if an old/legacy system exists, the data converted from the old system to the new one has been converted properly and that the integrity of the data on the new system is not broken.
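
    As a very small sketch of one such integrity check, the record counts of the legacy and the converted systems can be compared table by table. The example below assumes both systems expose a SQL database; the file names and table names are made up, and a real conversion test would compare far more than row counts (field values, encodings, referential integrity, and so on).

# Compare record counts between the legacy and the new database after conversion.
import sqlite3

legacy = sqlite3.connect("legacy_system.db")     # hypothetical old-system export
converted = sqlite3.connect("new_system.db")     # hypothetical new-system database

for table in ("customers", "accounts", "transactions"):  # assumed table names
    old_count = legacy.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    new_count = converted.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    status = "OK" if old_count == new_count else "MISMATCH"
    print(f"{table}: legacy={old_count}, converted={new_count} -> {status}")

legacy.close()
converted.close()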

    System Testing 
    Testing conducted on a complete, integrated system (software and/or hardware) to evaluate the system's compliance with its specified requirements, and to ensure that the application operates in the production environment.

    User Interface Testing 
    Testing done to ensure that the application operates efficiently and effectively outside the application boundary with all interface systems.4

    Security Testing 
    Testing done to ensure that the application's security controls and auditability features are functional.4

    Recovery Testing 
    Testing done to ensure that application restart and backup and recovery facilities operate as designed.4

    Globalization Testing 
    Execute test cases to ensure that the application block can be integrated with applications targeted toward locales other than the default locale used for development.2


    PHASE - II

    Performance Testing 
    Testing done to ensure that the application performs to customer expectations for response time, availability, portability, and scalability.4

    Regression Testing 
    Testing done to ensure that the applied changes to the application have not adversely affected previously tested functionality.4

    Load Testing 
    Load-test the application to analyze its behavior at various load levels. This ensures that it meets all the performance objectives that are stated as requirements.2
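
    As an informal sketch of the idea (not a replacement for a proper load-testing tool), stepping through increasing load levels and recording response times can be done with nothing but the Python standard library. The URL, the load levels, and the request count are placeholders.

# Fire batches of concurrent requests at increasing load levels and
# record the response times observed at each level.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

APP_URL = "http://example.com/loan-calculator"  # hypothetical URL
REQUESTS_PER_LEVEL = 50

def timed_request(url):
    start = time.perf_counter()
    with urlopen(url, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

for concurrent_users in (1, 5, 20, 50):  # assumed load levels
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(timed_request, [APP_URL] * REQUESTS_PER_LEVEL))
    average = sum(durations) / len(durations)
    print(f"{concurrent_users:>3} users: average {average:.3f}s, worst {max(durations):.3f}s")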


    PHASE - III

    User Scenario Testing 
    Testing done to cover all the possible scenarios that users can perform. Think of out-of-the-box scenarios. Think as a user to generate scenarios a user might actually go through. It can be positive and/or negative testing. Go through all the mouse clicks and keyboard presses the user may go through to get an action done (including logical and illogical steps). Aim for the "1% of people will do it" scenarios. (I wrote a blog about this last month; you may find it interesting here.)

    User Acceptance Testing
    Testing conducted to determine whether or not a system satisfies the acceptance criteria and to enable the customer to determine whether or not to accept the system. Acceptance testing ensures that the customer's requirements and objectives are met and that all components are correctly included in the customer package.2

    Beta Testing
    Testing, done by the customer, using a pre-release version of the product to verify and validate that the system meets business functional requirements. The purpose of beta testing is to detect application faults, failures, and defects.4

    10. Pass/ Fail Criteria – Specify the criteria to be used to determine whether each item has passed or failed testing.4

    Suspension Criteria
    Specify the criteria used to suspend all or a portion of the testing activity on test items associated with the plan.

    Resumption Criteria
    Specify the conditions that need to be met to resume testing activities after suspension. Specify the test items that must be repeated when testing is resumed.

    Approval Criteria
    Specify the conditions that need to be met to approve test results. Define the formal testing approval process.

    11. Testing Process – Identify the methods and criteria used in performing test activities. Define the specific methods and procedures for each type of test. Define the detailed criteria for evaluating test results.

    Test Deliverables
    Identify the deliverable documents from the test process. Test input and output data should be identified as deliverables. Test report logs, test incident reports, test summary reports, and metrics reports must also be considered testing deliverables.4

    Testing Tasks
    Identify the set of tasks necessary to prepare for and perform testing activities. Identify all inter-task dependencies and any specific skills required.4

    12. Test Management – 

    Individual roles and responsibilities
    Identify the groups responsible for managing, designing, preparing, executing, witnessing, checking, and resolving test activities. These groups may include the developers, testers, operations staff, technical support staff, data administration staff, and the user staff.4

    Schedule
    Identify the high level schedule for each testing task. Establish specific milestones for initiating and completing each type of test activity, for the development of a comprehensive plan, for the receipt of each test input, and for the delivery of test output. Estimate the time required to do each test activity. When planning and scheduling testing activities, it must be recognized that the testing process is iterative based on the testing task dependencies.4

    Staffing and Training
    Identify the resources allocated for the performance of testing tasks. Identify the organizational elements or individuals responsible for performing testing activities. Assign specific responsibilities. Specify resources by category. If automated tools are to be used in testing, specify the source of the tools, availability, and the usage requirements.4

    Risks and Assumptions
    Risk analysis should be done to estimate the amount and the level of testing that needs to be done. Risk analysis gives the necessary criteria about when to stop the testing process. Risk analysis prioritizes the test cases. It takes into account the impact of the errors and the probability of occurrence of the errors.2

    13. Environmental Requirements – Specify both the necessary and desired properties of the test environment.

    Hardware
    Identify the computer accessories, physical devices, related hardware, and network requirements needed to complete test activities.4

    Software
    Identify the software requirements needed to complete testing activities.4

    Security
    Identify the testing environment security and asset protection requirements.4

    Tools
    Identify the special software tools, techniques, and methodologies employed in the testing effort. The purpose and use of each tool shall be described, along with plans for the acquisition, training, support, and qualification of each tool or technique. These could be automation tools, or tools for performance testing, load testing, etc.4

    14. Control Procedures 5

    Problem Reporting
    Document the procedures to follow when an incident is encountered during the testing process. If a standard bug-reporting process already exists, mention the product/project name under which all the bugs will be reported.

    Change Requests
    Document the process for making modifications to the software. Identify who will sign off on the changes and what the criteria will be for including the changes in the current product.

    Dependencies
    For any change request that affects existing programs, the affected modules need to be identified first.

    15. Approvals – Identify the plan approvers. List the name, signature and date of plan approval.


    Wow! It became a long template. I know it's a lot of things to record, but we only do it once, or infrequently. It's part of the painful documentation process for a Test Engineer, but it is a vital part of the Testing Process.


    Sources/ Credits:
    5. Test plan sample: Software Testing and Quality Assurance Templates
    6. Medical Device Software - Verification, Validation and Compliance, by David A. Vogel.
