Wednesday, 15 June 2016

Adhoc Testing

When software testing is performed without proper planning and documentation, it is said to be Adhoc Testing. Such tests are executed only once, unless defects are uncovered.
Adhoc tests are done after formal testing is performed on the application. Adhoc methods are the least formal type of testing, as they do NOT follow a structured approach. Hence, defects found using this method are hard to replicate, as there are no test cases aligned to those scenarios.
Testing is carried out using the tester's knowledge of the application, and the tester tests randomly without following the specifications/requirements. Hence the success of Adhoc testing depends upon the capability of the tester who carries out the test. The tester has to find defects without any proper planning and documentation, relying solely on intuition.

When to Execute Adhoc Testing?

Adhoc testing can be performed when there is limited time to do exhaustive testing, and it is usually performed after the formal test execution. Adhoc testing will be effective only if the tester has an in-depth understanding of the System Under Test.

Forms of Adhoc Testing :

  1. Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can also make design changes early. This kind of testing usually happens after unit testing is complete.
  2. Pair Testing: Two testers are assigned the same modules; they share ideas and work on the same systems to find defects. One tester executes the tests while the other records notes on their findings.
  3. Monkey Testing: Testing is performed randomly without any test cases in order to break the system.
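Monkey testing, the most informal of the three, can be sketched in a few lines. The example below is a minimal illustration, assuming a hypothetical `parse_age` function as the code under test; any exception other than the expected validation error is recorded as a potential defect.

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse an age from user input."""
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def monkey_test(func, runs=1000):
    """Hammer func with random strings; collect any unexpected crash."""
    crashes = []
    for _ in range(runs):
        data = "".join(random.choice(string.printable)
                       for _ in range(random.randint(0, 10)))
        try:
            func(data)
        except ValueError:
            pass  # rejecting bad input is the expected behaviour
        except Exception as exc:  # anything else is a potential defect
            crashes.append((data, exc))
    return crashes

crashes = monkey_test(parse_age)
print(f"{len(crashes)} unexpected crashes found")
```

Because there are no scripted test cases, logging the random inputs that triggered a crash is the only way to make the failure reproducible.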

Various ways to make Adhoc Testing More Effective

  1. Preparation: By getting the defect details of a similar application, the probability of finding defects in the application is higher.
  2. Creating a Rough Idea: By having a rough idea in place, the tester will have a focused approach. It is NOT required to document a detailed plan of what to test and how to test.
  3. Divide and Rule: By testing the application part by part, we will have better focus and a better understanding of any problems.
  4. Targeting Critical Functionalities: A tester should target those areas that are NOT covered while designing test cases.
  5. Using Tools: Defects can also be brought to light by using profilers, debuggers and even task monitors. Hence, by being proficient with these tools, one can uncover several defects.
  6. Documenting the Findings: Though testing is performed randomly, it is better to document the tests if time permits and note down any deviations. If defects are found, corresponding test cases are created so that testers can retest the scenario.

Active Testing?

Active testing is a testing technique where the tester introduces test data and analyses the results.
During active testing, a tester builds a mental model of the software under test, which continues to grow and become refined as interaction with the software continues.

How Do We Do Active Testing?

  1. After each and every action performed on the application under test, we need to check whether the application seems to fulfil the client's needs.
  2. If not, either the application needs to be adapted, or we have found a problem in the application. Continuously engaging in the testing process helps us come up with new ideas, test cases and test data.
  3. At the same time, we need to note down things we might want to return to later, or follow up on them with the concerned team, eventually finding and pinpointing problems in the software.
  4. Hence, any application under test needs active testing, which involves testers who spot the defects.

Accessibility Testing?

Accessibility testing is a subset of usability testing wherein the users under consideration are people of all abilities and disabilities. The significance of this testing is to verify both usability and accessibility.
Accessibility aims to cater to people with different impairments, such as:
  • Visual Impairment
  • Physical Impairment
  • Hearing Impairment
  • Cognitive Impairment
  • Learning Impairment
A good web application should cater to all sets of people and NOT just be limited to people with disabilities. These include:
  1. Users with poor communications infrastructure
  2. Older people and new users, who are often computer illiterate
  3. Users using old systems (NOT capable of running the latest software)
  4. Users who are using NON-standard equipment
  5. Users who have restricted access

How to Perform Accessibility Testing

The Web Accessibility Initiative (WAI) describes the strategy for preliminary and conformance reviews of web sites. The WAI also includes a list of software tools to assist with conformance evaluations. These tools range from ones that address specific issues, such as colour blindness, to automated spidering tools that crawl an entire site.

Web accessibility Testing Tools

Product       Vendor             URL
AccVerify     HiSoftware         http://www.hisoftware.com
Bobby         Watchfire          http://www.watchfire.com
WebXM         Watchfire          http://www.watchfire.com
Ramp Ascend   Deque              http://www.deque.com
InFocus       SSB Technologies   http://www.ssbtechnologies.com/

Role of Automated Tools in Accessibility Testing

The automated accessibility testing tools listed above are very good at identifying pages and lines of code that need to be manually checked for accessibility. They can:
  1. Check the syntax of the site's code
  2. Search for known patterns that humans have listed
  3. Identify pages containing elements that may cause problems
  4. Identify some actual accessibility problems
  5. Identify some potential problems
The interpretation of the results from the automated accessibility testing tools requires experience in accessibility techniques with an understanding of technical and usability issues.

Tuesday, 14 June 2016

Acceptance Testing?

"Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications." The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
There are various forms of acceptance testing:
  • User Acceptance Testing
  • Business Acceptance Testing
  • Alpha Testing
  • Beta Testing

Acceptance Testing - In SDLC

The following diagram explains the fitment of acceptance testing in the software development life cycle.
The acceptance test cases are executed against the test data or using an acceptance test script and then the results are compared with the expected ones.


Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes:
  • Functional Correctness and Completeness
  • Data Integrity
  • Data Conversion
  • Usability
  • Performance
  • Timeliness
  • Confidentiality and Availability
  • Installability and Upgradability
  • Scalability
  • Documentation

Acceptance Test Plan - Attributes

The acceptance test activities are carried out in phases. Firstly, the basic tests are executed, and if the test results are satisfactory then the execution of more complex scenarios is carried out.
The Acceptance test plan has the following attributes:
  • Introduction
  • Acceptance Test Category
  • Operation Environment
  • Test Case ID
  • Test Title
  • Test Objective
  • Test Procedure
  • Test Schedule
  • Resources
The acceptance test activities are designed to reach one of the following conclusions:
  1. Accept the system as delivered
  2. Accept the system after the requested modifications have been made
  3. Do not accept the system

Acceptance Test Report - Attributes

The Acceptance test Report has the following attributes:
  • Report Identifier
  • Summary of Results
  • Variations
  • Recommendations
  • Summary of To-Do List
  • Approval Decision





Wednesday, 11 May 2016

Test planning process with Explanations

Stage #1: Review and analyze the requirements 
This is the first step for any project and plays a very important role in any testing project.
While trying to analyze the requirements, the test team has to identify and determine what items have to be tested. These items are heavily based on how the end user will consume the system, and hence have to be measurable, detailed and meaningful.
The items or features that are identified generally describe what the particular software or product intends to do; these are characterized as functional requirements. There can also be some non-functional requirements identified, such as performance or end-to-end software component interaction.
The people who are aware of the business goal and can suitably define the requirements are generally part of this activity. The requirements are then documented and circulated for review. All the review comments and feedback must be incorporated to drive the document to the final sign-off.
Stage #2: Scope of testing
The scope of testing is generally an extension of the requirement analysis phase, and the two are mostly considered a single activity, since they go hand in hand. Once the requirements are out, the test team determines what items are to be tested and what are not.
This activity should also determine which areas of testing are covered by which teams.
For example: a team dedicated to FVT (Function Verification Test) and one dedicated to SVT (System Verification Test) will have completely different scopes for testing, and globalization may or may not be performed by FVT, and so on.
Also, if the test project requires automation, its feasibility is evaluated here. Having a clear scope defined will prove invaluable to the management to clearly figure out what has been tested and which team has covered the testing effort.

Tuesday, 5 April 2016

Agile model – advantages, disadvantages and when to use it

The Agile development model is also a type of Incremental model. Software is developed in incremental, rapid cycles. This results in small incremental releases, with each release building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. It is used for time-critical applications. Extreme Programming (XP) is currently one of the most well-known agile development life cycle models.
Diagram of Agile model:
Advantages of Agile model:
  • Customer satisfaction by rapid, continuous delivery of useful software.
  • People and interactions are emphasized rather than process and tools. Customers, developers and testers constantly interact with each other.
  • Working software is delivered frequently (weeks rather than months).
  • Face-to-face conversation is the best form of communication.
  • Close, daily cooperation between business people and developers.
  • Continuous attention to technical excellence and good design.
  • Regular adaptation to changing circumstances.
  • Even late changes in requirements are welcomed
Disadvantages of Agile model:
  • In case of some software deliverables, especially the large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
  • There is lack of emphasis on necessary designing and documentation.
  • The project can easily get taken off track if the customer representative is not clear about what final outcome they want.
  • Only senior programmers are capable of taking the kind of decisions required during the development process. Hence it has no place for newbie programmers, unless combined with experienced resources.
When to use Agile model:
  • When new changes need to be implemented. The freedom agile gives to change is very important. New changes can be implemented at very little cost because of the frequency of new increments that are produced.
  • To implement a new feature, the developers need to lose only the work of a few days, or even only hours, to roll back and implement it.
  • Unlike the waterfall model in agile model very limited planning is required to get started with the project. Agile assumes that the end users’ needs are ever changing in a dynamic business and IT world. Changes can be discussed and features can be newly effected or removed based on feedback. This effectively gives the customer the finished system they want or need.
  • Both system developers and stakeholders alike, find they also get more freedom of time and options than if the software was developed in a more rigid sequential way. Having options gives them the ability to leave important decisions until more or better data or even entire hosting programs are available; meaning the project can continue to move forward without fear of reaching a sudden standstill.

difference between Severity and Priority?

There are two key things in defects of the software testing. They are:
1)     Severity
2)     Priority
What is the difference between Severity and Priority?
1)  Severity:
It is the extent to which the defect can affect the software. In other words, it defines the impact that a given defect has on the system. For example: if an application or web page crashes when a remote link is clicked, clicking the remote link is rare for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.
Severity can be of following types:
  • Critical: The defect that results in the termination of the complete system or one or more component of the system and causes extensive corruption of the data. The failed function is unusable and there is no acceptable alternative method to achieve the required results then the severity will be stated as critical.
  • Major: The defect that results in the termination of the complete system or one or more component of the system and causes extensive corruption of the data. The failed function is unusable but there exists an acceptable alternative method to achieve the required results then the severity will be stated as major.
  • Moderate: The defect that does not result in the termination, but causes the system to produce incorrect, incomplete or inconsistent results then the severity will be stated as moderate.
  • Minor: The defect that does not result in the termination and does not damage the usability of the system and the desired results can be easily obtained by working around the defects then the severity is stated as minor.
  • Cosmetic: The defect is related to the enhancement of the system, where the changes concern the look and feel of the application; the severity is stated as cosmetic.
2)  Priority:
Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait? This priority status is set by the tester for the developer, mentioning the time frame to fix the defect. If high priority is mentioned, then the developer has to fix it at the earliest. The priority status is set based on the customer requirements. For example: if the company name is misspelled on the home page of the website, then the priority is high and the severity is low.
Priority can be of following types:
  • Low: The defect is an irritant which should be repaired, but repair can be deferred until after more serious defects have been fixed.
  • Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
  • High: The defect must be resolved as soon as possible because it is affecting the application or the product severely. The system cannot be used until the repair has been done.
Few very important scenarios related to the severity and priority which are asked during the interview:
High Priority & High Severity: An error which occurs in the basic functionality of the application and will not allow the user to use the system. (E.g. a site maintaining student details: if saving a record does not work, this is a high priority and high severity bug.)
High Priority & Low Severity: Spelling mistakes on the cover page, heading or title of an application.
High Severity & Low Priority: An error which occurs in the functionality of the application (for which there is no workaround) and will not allow the user to use the system, but which occurs on a link that is rarely clicked by the end user.
Low Priority and Low Severity: Any cosmetic or spelling issue within a paragraph or in the body of a report (NOT on the cover page, heading or title).

Risk based testing & How to perform risk based testing?

Risk based testing is basically a testing done for the project based on risks. Risk based testing uses risk to prioritize and emphasize the appropriate tests during test execution. In simple terms – Risk is the probability of occurrence of an undesirable outcome. This outcome is also associated with an impact. Since there might not be sufficient time to test all functionality, Risk based testing involves testing the functionality which has the highest impact and probability of failure.
Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the residual level of product risk when the system is deployed.
  • Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide testing planning, specification, preparation and execution.
  • Risk-based testing involves both mitigation – testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects – and contingency – testing to identify work-arounds to make the defects that do get past us less painful.
  • Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas.
  • Risk-based testing can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform.
The goal of risk-based testing cannot practically be a risk-free project. What we can get from risk-based testing is to carry out testing with best practices in risk management, to achieve a project outcome that balances risks with quality, features, budget and schedule.
How to perform risk based testing?
  1. Make a prioritized list of risks.
  2. Perform testing that explores each risk.
  3. As risks evaporate and new ones emerge, adjust your test effort to stay focused on the current crop.
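A simple way to build the prioritized list in step 1 is to score each risk as likelihood multiplied by impact and sort. The sketch below assumes a hypothetical risk register with 1-5 ratings; the features and scales are illustrative only.

```python
# Hypothetical risk register: (feature, likelihood 1-5, impact 1-5).
risks = [
    ("payment processing", 3, 5),
    ("report export", 4, 2),
    ("user login", 2, 5),
    ("profile avatar upload", 5, 1),
]

def prioritize(risk_list):
    """Order features by risk score (likelihood x impact), highest first."""
    return sorted(risk_list, key=lambda r: r[1] * r[2], reverse=True)

# Test effort is then spent top-down through this list, and the list is
# re-sorted as risks evaporate or new ones emerge.
for feature, likelihood, impact in prioritize(risks):
    print(f"{feature}: score {likelihood * impact}")
```

Re-running the sort whenever ratings change keeps the test effort focused on the current crop of risks, as step 3 requires.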

Monday, 4 April 2016

Use Stubs And Drivers?

Stubs are dummy modules that are always distinguished as "called programs". They are used in integration testing (top-down approach) when sub-programs are under construction.

Stubs are considered dummy modules that always simulate the low-level modules.

Drivers are also a form of dummy module, always distinguished as "calling programs". They are used in bottom-up integration testing, only when the main programs are under construction.

Drivers are considered dummy modules that always simulate the high-level modules.

Example of Stubs and Drivers is given below:-

For example, we have 3 modules: Login, Home and User. The Login module is ready and we need to test it, but it calls functions from Home and User (which are not ready). To test the selected module, we write a short dummy piece of code which simulates Home and User and returns values for Login. This piece of dummy code is called a Stub, and it is used in top-down integration.

Considering the same example: if the Home and User modules are ready and the Login module is not, and we need to test the Home and User modules which need values returned from the Login module, then to extract those values we write a short piece of dummy code for Login which returns values for Home and User. These pieces of code are called Drivers, and they are used in bottom-up integration.

Conclusion:-
So it is clear from the above example that Stubs act as "called" functions in top-down integration, and Drivers are "calling" functions in bottom-up integration.
---------------------------------------------------------------------------------------------------------
The concepts of Stubs and Drivers are mostly used in component testing. Component testing may be done in isolation from the rest of the system, depending upon the context of the development cycle.
Stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner.

Suppose you have a function (Function A) that calculates the total marks obtained by a student in a particular academic year, and that this function derives its values from another function (Function B) which calculates the marks obtained in a particular subject.

You have finished working on Function A and want to test it. But the problem you face here is that you can't run Function A without input from Function B, and Function B is still under development. In this case, you create a dummy function to act in place of Function B to test your function. This dummy function gets called by the function under test. Such a dummy is called a Stub.

To understand what a driver is, suppose you have finished Function B and are waiting for Function A to be developed. In this case you create a dummy to call Function B. This dummy is called the Driver.
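The Function A / Function B scenario above can be sketched in code. This is a minimal illustration, not a full integration test harness; the function names and marks values are hypothetical.

```python
# --- Top-down: Function B is unfinished, so a stub stands in for it. ---
def function_b_stub(subject):
    """Stub: returns a canned value in place of the unfinished Function B."""
    return 75

def function_a(subjects, get_marks):
    """Function A under test: total marks across all subjects."""
    return sum(get_marks(s) for s in subjects)

assert function_a(["maths", "physics"], function_b_stub) == 150

# --- Bottom-up: Function B is ready but Function A is not, so a driver
# (a throwaway caller) exercises Function B directly. ---
def function_b(subject):
    marks = {"maths": 80, "physics": 70}  # hypothetical data
    return marks[subject]

def driver():
    """Driver: calls Function B with sample inputs and checks the results."""
    assert function_b("maths") == 80
    assert function_b("physics") == 70

driver()
print("stub and driver checks passed")
```

The stub is called by the code under test, while the driver does the calling, mirroring the "called" versus "calling" distinction above.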

Steps: Test planning process

Here below, is a walk-through of the various stages in the test planning process, discussed concisely.

Stage #1: Review and analyze the requirements 
Stage #2: Scope of testing
Stage #3: Design the test strategy according to the scope 
Stage #4: Identify the required tools needed for testing and management 
Stage #5: Estimate the test effort and team 
Stage #6: Define test schedule 
Stage #7: Enablement plan
Stage #8: Determine and procure the test environment
Stage #9: Identify test metrics 
Stage #10: Create the software test plan, get it reviewed and approved

Tuesday, 29 March 2016

How would you allocate tasks to team members?

Task                                         Member
Analyze software requirement specification   All the members
Create the test specification                Tester/Test Analyst
Build up the test environment                Test administrator
Execute the test cases                       Tester, Test administrator
Report defects                               Tester

What is a testing type and what are the commonly used testing types?

To get an expected test outcome, a standard procedure is followed, which is referred to as a Testing Type.
Commonly used testing types are:
  • Unit Testing: Tests the smallest unit of code of an application
  • API Testing: Tests the APIs created for the application
  • Integration Testing: Individual software modules are combined and tested
  • System Testing: Complete testing of the system
  • Install/Uninstall Testing: Testing done from the client/customer's point of view
  • Agile Testing: Testing through Agile techniques

What are the common mistakes which create issues?


  • Matching resources to wrong projects
  • Test manager lacking skills
  • Not listening to others
  • Poor Scheduling
  • Underestimating
  • Ignoring the small problems
  • Not following the process

What does a typical test report contain? What are the benefits of test reports?

A test report contains the following things:
  • Project Information
  • Test Objective
  • Test Summary
  • Defect
The benefits of test reports are:
  • The current status of the project and the quality of the product are made known
  • If required, stakeholders and customers can take corrective action
  • A final document helps to decide whether the product is ready for release

What is a test management review and why is it important?

Management review is also referred to as Software Quality Assurance, or SQA. SQA focuses more on the software process rather than the software work products. It is a set of activities designed to make sure that the project manager follows the standard process. SQA helps the test manager to benchmark the project against the set standards.

In manual testing, what are stubs and drivers?

Both stubs and drivers are part of incremental testing. In incremental testing there are two approaches, namely the bottom-up and top-down approaches. Drivers are used in bottom-up testing and stubs are used in the top-down approach. In order to test the main module, a stub is used, which is a dummy piece of code or program.

difference between Test matrix and Traceability matrix?

Test Matrix: A test matrix is used to capture actual quality, effort, the plan, resources and time required across all phases of software testing.
Traceability Matrix: Mapping between test cases and customer requirements is known as a Traceability Matrix.

When is RTM (Requirement Traceability Matrix) prepared?


RTM is prepared before test case designing.  Requirements should be traceable from review activities.

What are the best practices for software quality assurance?


The best practices for an effective SQA implementation is
  • Continuous Improvement
  • Documentation
  • Tool Usage
  • Metrics
  • Responsibility by team members
  • Experienced SQA auditors

What are the steps you would follow once you find a defect?


Once a defect is found, you would follow these steps:
a)      Recreate the defect
b)      Attach the screenshot
c)      Log the defect

what is "Test Plan Driven" or "Key Word Driven" method of testing?

This technique uses an actual test case document developed by testers using a spreadsheet containing special "keywords". The keywords control the processing.
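The idea can be sketched with a minimal keyword interpreter. The `open` and `type` keywords and the rows below are hypothetical stand-ins for what a real test spreadsheet would export.

```python
# Each "spreadsheet" row is (keyword, argument); the keyword selects
# the action to perform. Rows and keywords here are hypothetical.
log = []

def open_page(url):
    log.append(f"opened {url}")

def type_text(text):
    log.append(f"typed {text}")

KEYWORDS = {"open": open_page, "type": type_text}

test_rows = [
    ("open", "http://example.com/login"),
    ("type", "alice"),
]

# The interpreter walks the rows and dispatches on the keyword.
for keyword, argument in test_rows:
    KEYWORDS[keyword](argument)

print(log)
```

Because the test logic lives in data rather than code, non-programmers can author new tests by adding rows, which is the main appeal of the keyword-driven method.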

DFD (Data Flow Diagram) ?

When the "flow of data" through an information system is graphically represented, it is known as a Data Flow Diagram. It is also used for the visualization of data processing.

LCSAJ

LCSAJ stands for 'linear code sequence and jump'. It consists of the following three items:
a)      Start of the linear sequence of executable statements
b)      End of the linear sequence
c)       The target line to which control flow is transferred at the end of the linear sequence

what is N+1 testing?

N+1 testing is a variation of regression testing. In this technique, testing is performed in multiple cycles, in which errors found in test cycle N are resolved and re-tested in test cycle N+1. The cycle is repeated until no errors are found.

What is Fuzz testing and when it is used?


Fuzz testing is used to detect security loopholes and coding errors in software. In this technique, random data is fed to the system in an attempt to crash it. If a vulnerability is found, a tool called a fuzz tester is used to determine potential causes. This technique is more useful for bigger projects, but it only detects major faults.
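A toy fuzz tester can be sketched as follows. The `parse_record` parser and its "NAME:AGE" byte format are hypothetical; the point is that malformed input should be rejected with an expected error type, and anything else is recorded as a finding worth investigating.

```python
import random

def parse_record(data):
    """Hypothetical parser under test: bytes in 'NAME:AGE' ASCII form."""
    name, age = data.decode("ascii").split(":")
    return name, int(age)

def fuzz(parser, runs=500):
    """Throw random byte strings at the parser; collect unexpected errors."""
    random.seed(0)  # reproducible fuzzing run
    findings = []
    for _ in range(runs):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(12)))
        try:
            parser(blob)
        except (ValueError, UnicodeDecodeError):
            pass  # rejecting malformed input is the expected behaviour
        except Exception as exc:  # crashes and unexpected errors surface here
            findings.append((blob, type(exc).__name__))
    return findings

print(f"{len(fuzz(parse_record))} unexpected crashes")
```

Real fuzz testers mutate valid inputs and track code paths rather than generating purely random bytes, but the crash-or-reject loop above is the core of the technique.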

what are the main advantages of statement coverage metric of software testing?

The benefits of the statement coverage metric are that:
a)      It does not require processing source code and can be applied directly to object code
b)      Bugs are distributed evenly through code, due to which the percentage of executable statements covered reflects the percentage of faults discovered

How to generate test cases for replace string method?

a)      If characters in the new string > characters in the previous string, none of the characters should get truncated
b)      If characters in the new string < characters in the previous string, junk characters should not be added
c)      Spaces before and after the string should not be deleted
d)      The string should be replaced only for the first occurrence of the string
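These four test cases can be expressed directly as assertions. The `replace_first` implementation below is a hypothetical stand-in for the method under test (it wraps Python's built-in `str.replace` with a count of 1):

```python
def replace_first(text, old, new):
    """Hypothetical method under test: replace the first occurrence only."""
    return text.replace(old, new, 1)

# a) longer replacement: no characters may be truncated
assert replace_first("hello world", "world", "wonderful world") == "hello wonderful world"
# b) shorter replacement: no junk characters may be left behind
assert replace_first("hello world", "world", "you") == "hello you"
# c) spaces before and after the string must survive
assert replace_first("  hi  ", "hi", "yo") == "  yo  "
# d) only the first occurrence is replaced
assert replace_first("aaa", "a", "b") == "baa"
print("all replace-string checks passed")
```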

How will you handle a conflict amongst your team members?

  • I will talk individually to each person and note their concerns
  • I will find solution to the common problems raised by team members
  • I will hold a team meeting, reveal the solution and ask people to co-operate

What are the categories of defects?

Mainly there are three defect categories

  • Wrong: When a requirement is implemented incorrectly
  • Missing: It is a variance from the specification, an indication that a specification was not implemented or a requirement of the customer is not met
  • Extra: A requirement incorporated into the product that was not given by the end customer. It is considered as a defect because it is a variance from the existing requirements

How does a test coverage tool work?

The code coverage tool runs in parallel while testing is performed on the actual product. It monitors the executed statements of the source code. When the final testing is done, we get a complete report of the pending statements and also get the coverage percentage.
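The line-monitoring idea can be sketched with Python's built-in tracing hook. This toy tracer records which lines of a hypothetical `grade` function execute, much as a real coverage tool instruments the product under test (real tools such as coverage.py are far more complete):

```python
import sys

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"

executed = set()

def tracer(frame, event, arg):
    """Record each executed line of grade(), like a coverage tool does."""
    if event == "line" and frame.f_code is grade.__code__:
        executed.add(frame.f_lineno - grade.__code__.co_firstlineno)
    return tracer

sys.settrace(tracer)
grade(80)            # exercises only the 'pass' branch
sys.settrace(None)

TOTAL_LINES = 3      # the three executable lines of grade()
print(f"statement coverage: {len(executed)}/{TOTAL_LINES}")
```

After one call, only two of the three executable lines have run; the `return "fail"` line is the pending statement the coverage report would flag.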

difference between a "defect" and a "failure" in software testing?

In simple terms, when a defect reaches the end customer it is called a failure, while if the defect is identified internally and resolved then it is referred to as a defect.

How do test documents in a project span across the software development lifecycle?

The test documents span across the software development lifecycle in the following manner:

  • Central/Project test plan: It is the main test plan that outlines the complete test strategy of the project. This plan is used till the end of the software development lifecycle
  • Acceptance test plan: This document begins during the requirement phase and is completed at final delivery
  • System test plan: This plan starts during the design plan and proceeds until the end of the project
  • Integration and Unit test plan: Both these test plans start during the execution phase and last until the final delivery


Which test cases are written first: black box or white box?

Black box test cases are written first because writing them requires only the project plan and requirement documents, which are readily available at the beginning of the project. Writing white box test cases requires greater architectural understanding, which is not available at the start of the project.

what is the difference between latent and masked defects?


  • Latent defect: A latent defect is an existing defect that has not caused a failure because the sets of conditions were never met
  • Masked defect: It is an existing defect that has not caused a failure because another defect has prevented that part of the code from being executed

what is bottom up testing?

Bottom up testing is an approach to integration testing, where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

what are the different types of test coverage techniques?

Different types of test coverage techniques include

  • Statement Coverage: It verifies that each line of source code has been executed and tested
  • Decision Coverage: It ensures that every decision in the source code is executed and tested
  • Path Coverage: It ensures that every possible route through a given part of code is executed and tested
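The difference between statement and decision coverage can be shown on a hypothetical `safe_div` function: one test can execute every statement while still leaving one decision outcome untested.

```python
def safe_div(a, b):
    """Hypothetical function: divide, or return 0 when b is zero."""
    result = 0
    if b != 0:
        result = a / b
    return result

# Statement coverage: this one call executes every line of safe_div ...
assert safe_div(10, 2) == 5
# ... but decision coverage is still incomplete: the False outcome of
# 'b != 0' was never taken. A second test closes that gap.
assert safe_div(10, 0) == 0
print("statement coverage needs 1 test; decision coverage needs 2")
```

Path coverage subsumes both: it would require every route through the function, which here coincides with the two decision outcomes.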

what is the meaning of breadth testing?

Breadth testing is a test suite that exercises the full functionality of a product but does not test features in detail.

what is the difference between Pilot and Beta testing?

The difference between pilot and beta testing is that pilot testing is done by a group of users actually using the product before the final deployment, while in beta testing we do not input real data; the product is installed at the end customer's site to validate whether it can be used in production.