Wednesday, 15 June 2016

Adhoc Testing

When software testing is performed without proper planning and documentation, it is called Adhoc Testing. Such tests are executed only once, unless defects are uncovered.
Adhoc tests are done after formal testing has been performed on the application. Adhoc methods are the least formal type of testing, as this is NOT a structured approach. Hence, defects found using this method are hard to replicate, as there are no test cases aligned to those scenarios.
Testing is carried out using the tester's knowledge of the application, and the tester tests randomly without following the specifications/requirements. Hence, the success of Adhoc testing depends upon the capability of the tester who carries out the test. The tester has to find defects without any proper planning and documentation, relying solely on intuition.

When to Execute Adhoc Testing?

Adhoc testing can be performed when there is limited time to do exhaustive testing; it is usually performed after the formal test execution. Adhoc testing will be effective only if the tester has an in-depth understanding of the System Under Test.

Forms of Adhoc Testing:

  1. Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can also make design changes early. This kind of testing usually happens after unit testing is complete.
  2. Pair Testing: Two testers are assigned the same modules; they share ideas and work on the same systems to find defects. One tester executes the tests while the other records notes on their findings.
  3. Monkey Testing: Testing is performed randomly, without any test cases, in order to break the system.
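As a minimal sketch of monkey testing, the harness below throws random string inputs at a function and records any input that crashes it. The function under test (`parse_age`) is a hypothetical example, not from any real system; the fixed random seed makes any discovered crash reproducible, which addresses the replication problem mentioned above.

```python
import random
import string

def monkey_test(target, iterations=1000, seed=42):
    """Feed random string inputs to target and record any crashes."""
    random.seed(seed)  # fixed seed makes any found crash reproducible
    failures = []
    for _ in range(iterations):
        # Build a random printable string, including the empty-string edge case
        length = random.randint(0, 50)
        data = "".join(random.choice(string.printable) for _ in range(length))
        try:
            target(data)
        except Exception as exc:
            # Log the exact input so the defect can be replicated later
            failures.append((data, repr(exc)))
    return failures

def parse_age(text):
    # Hypothetical function under test: deliberately fragile,
    # it crashes on any non-numeric input
    return int(text)

crashes = monkey_test(parse_age)
print(f"{len(crashes)} crashing inputs found out of 1000")
```

Because the inputs are random rather than scripted, the value of such a run lies entirely in the crash log it leaves behind.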

Various Ways to Make Adhoc Testing More Effective

  1. Preparation: By getting the defect details of a similar application, the probability of finding defects in the application under test is higher.
  2. Creating a Rough Idea: By having a rough idea in place, the tester will have a focused approach. It is NOT required to document a detailed plan of what to test and how to test it.
  3. Divide and Rule: By testing the application part by part, we have better focus and a better understanding of any problems found.
  4. Targeting Critical Functionalities: A tester should target those areas that were NOT covered while designing test cases.
  5. Using Tools: Defects can also be brought to light by using profilers, debuggers and even task monitors. Hence, a tester proficient in these tools can uncover several defects.
  6. Documenting the Findings: Though testing is performed randomly, it is better to document the tests if time permits and note down any deviations. If defects are found, corresponding test cases are created so that testers can retest the scenario.
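Point 6 above can be illustrated with a small sketch: once an adhoc session uncovers a defect, it is pinned down as repeatable test cases. The defect and the `parse_age` function here are hypothetical examples, invented for illustration.

```python
# Hypothetical adhoc finding: parse_age("") raised ValueError instead of
# returning None. After the fix, the finding is preserved as regression
# tests so the scenario can be retested at any time.

def parse_age(text):
    # Fixed implementation: tolerate empty and non-numeric input
    stripped = text.strip()
    return int(stripped) if stripped.isdigit() else None

def test_parse_age_handles_empty_input():
    # Regression test created directly from the adhoc finding
    assert parse_age("") is None

def test_parse_age_handles_garbage():
    assert parse_age("not a number") is None

def test_parse_age_still_parses_valid_input():
    assert parse_age(" 42 ") == 42

test_parse_age_handles_empty_input()
test_parse_age_handles_garbage()
test_parse_age_still_parses_valid_input()
print("all regression tests passed")
```

In this way a one-off random discovery is converted into a permanent, documented test case.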

Active Testing?

Active testing is a testing technique where the tester introduces test data and analyses the results.
During Active testing, a tester builds a mental model of the software under test, which continues to grow and refine as the interaction with the software continues.

How Do We Do Active Testing?

  1. At the end of each and every action performed on the application under test, we need to check whether the model/application seems to fulfill the client's needs.
  2. If not, either the application needs to be adapted, or we have found a problem in the application. Continuous engagement in the testing process helps us come up with new ideas, test cases and test data.
  3. At the same time, we need to note down things we might want to return to later, or follow up on with the concerned team, eventually finding and pinpointing problems in the software.
  4. Hence, any application under test needs active testing, which involves testers who spot the defects.

Accessibility Testing?

Accessibility testing is a subset of usability testing wherein the users under consideration are people of all abilities and disabilities. The significance of this testing is to verify both usability and accessibility.
Accessibility aims to cater to people of different abilities, such as:
  • Visual Impairment
  • Physical Impairment
  • Hearing Impairment
  • Cognitive Impairment
  • Learning Impairment
A good web application should cater to all sets of people and NOT be limited just to people with disabilities. These include:
  1. Users with poor communications infrastructure
  2. Older people and new users, who are often computer illiterate
  3. Users using old systems (NOT capable of running the latest software)
  4. Users who are using NON-standard equipment
  5. Users who have restricted access

How to Perform Accessibility Testing

The Web Accessibility Initiative (WAI) describes the strategy for preliminary and conformance reviews of web sites. The WAI also maintains a list of software tools to assist with conformance evaluations. These tools range from those targeting specific issues, such as colour blindness, to tools that perform automated spidering of an entire site.

Web accessibility Testing Tools

  Product       Vendor              URL
  AccVerify     HiSoftware          http://www.hisoftware.com
  Bobby         Watchfire           http://www.watchfire.com
  WebXM         Watchfire           http://www.watchfire.com
  Ramp Ascend   Deque               http://www.deque.com
  InFocus       SSB Technologies    http://www.ssbtechnologies.com/

Role of Automated Tools in Accessibility Testing

The automated accessibility testing tools listed above are very good at identifying pages and lines of code that need to be manually checked for accessibility. They can:
  1. Check the syntax of the site's code
  2. Search for known error patterns that humans have catalogued
  3. Identify pages containing elements that may cause problems
  4. Identify some actual accessibility problems
  5. Identify some potential problems
Interpreting the results from automated accessibility testing tools requires experience in accessibility techniques, together with an understanding of technical and usability issues.
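A minimal sketch of the kind of syntax check these tools perform is shown below, using only Python's standard-library HTML parser: it flags <img> elements that lack an alt attribute, one of the most common accessibility problems. The sample page is invented for illustration; real tools check many more rules than this one.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> elements that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            # Record the image source so a human can review the page
            self.problems.append(attr_map.get("src", "<unknown>"))

# Hypothetical page fragment: one compliant image, one missing alt text
page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="banner.png">
</body></html>
"""

checker = AltTextChecker()
checker.feed(page)
print("Images missing alt text:", checker.problems)
```

As the surrounding text notes, a flagged element is only a candidate problem; a human still has to judge whether, say, a decorative image legitimately has an empty alt attribute.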

Tuesday, 14 June 2016

Acceptance Testing?

"Acceptance testing, a testing technique performed to determine whether or not the software system has met the requirement specifications". The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
There are various forms of acceptance testing:
  • User Acceptance Testing
  • Business Acceptance Testing
  • Alpha Testing
  • Beta Testing

Acceptance Testing - In SDLC

The following diagram explains where acceptance testing fits in the software development life cycle.
The acceptance test cases are executed against the test data or using an acceptance test script, and the results are then compared with the expected ones.
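The execute-and-compare step can be sketched as a small script: each case pairs test data with the result the requirements say the system must produce, and the script reports PASS or FAIL per case. The order-total calculation and case IDs are hypothetical examples, not from any real acceptance suite.

```python
# Hypothetical acceptance test data: each case pairs an input with the
# result the business requirements say the system must produce.
acceptance_cases = [
    {"id": "AT-01", "input": (3, 19.99), "expected": 59.97},
    {"id": "AT-02", "input": (0, 19.99), "expected": 0.0},
]

def system_under_test(quantity, unit_price):
    # Stand-in for the delivered system's order-total calculation
    return round(quantity * unit_price, 2)

def run_acceptance_tests(cases):
    """Execute each case and compare the actual result with the expected one."""
    results = []
    for case in cases:
        actual = system_under_test(*case["input"])
        results.append({
            "id": case["id"],
            "status": "PASS" if actual == case["expected"] else "FAIL",
            "expected": case["expected"],
            "actual": actual,
        })
    return results

for result in run_acceptance_tests(acceptance_cases):
    print(result["id"], result["status"])
```

The resulting PASS/FAIL list feeds directly into the acceptance test report described later in this post.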


Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes:
  • Functional Correctness and Completeness
  • Data Integrity
  • Data Conversion
  • Usability
  • Performance
  • Timeliness
  • Confidentiality and Availability
  • Installability and Upgradability
  • Scalability
  • Documentation

Acceptance Test Plan - Attributes

The acceptance test activities are carried out in phases. First, the basic tests are executed; if the test results are satisfactory, then the execution of more complex scenarios is carried out.
The Acceptance test plan has the following attributes:
  • Introduction
  • Acceptance Test Category
  • Operation Environment
  • Test Case ID
  • Test Title
  • Test Objective
  • Test Procedure
  • Test Schedule
  • Resources
The acceptance test activities are designed to reach one of the following conclusions:
  1. Accept the system as delivered
  2. Accept the system after the requested modifications have been made
  3. Do not accept the system

Acceptance Test Report - Attributes

The Acceptance Test Report has the following attributes:
  • Report Identifier
  • Summary of Results
  • Variations
  • Recommendations
  • Summary of To-Do List
  • Approval Decision