Wednesday, 15 June 2016

Adhoc Testing

When software testing is performed without proper planning and documentation, it is called Adhoc Testing. Such tests are executed only once, unless defects are uncovered.
Adhoc tests are done after formal testing is performed on the application. Adhoc methods are the least formal type of testing, as they do NOT follow a structured approach. Hence, defects found using this method are hard to replicate, as there are no test cases aligned to those scenarios.
Testing is carried out with the tester's knowledge of the application, and the tester tests randomly without following the specifications/requirements. Hence the success of Adhoc testing depends upon the capability of the tester who carries out the test. The tester has to find defects without any proper planning or documentation, relying solely on intuition.

When to Execute Adhoc Testing?

Adhoc testing can be performed when there is limited time for exhaustive testing, and it is usually performed after the formal test execution. Adhoc testing will be effective only if the tester has an in-depth understanding of the System Under Test.

Forms of Adhoc Testing:

  1. Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can also make design changes early. This kind of testing usually happens after unit testing is complete.
  2. Pair Testing: Two testers are assigned the same modules; they share ideas and work on the same systems to find defects. One tester executes the tests while the other records notes on the findings.
  3. Monkey Testing: Testing is performed randomly without any test cases in order to break the system.
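Monkey testing can be sketched in a few lines of Python. The `parse_age` function below (and its deliberate bug) is a hypothetical example, not from the original text: random strings are thrown at it, graceful rejections are ignored, and any other crash is recorded as a defect.

```python
import random
import string

def parse_age(value):
    """Hypothetical function under test: parses an age field from a web form."""
    if value[0] == "-":              # bug: crashes with IndexError on empty input
        raise ValueError("age cannot be negative")
    age = int(value)
    if age > 150:
        raise ValueError("age out of range")
    return age

def monkey_test(func, iterations=1000):
    """Feed random strings to func. ValueError counts as a graceful
    rejection of bad input; any other exception is recorded as a defect."""
    defects = []
    for _ in range(iterations):
        data = "".join(random.choices(string.printable, k=random.randint(0, 8)))
        try:
            func(data)
        except ValueError:
            pass                     # expected rejection of malformed input
        except Exception as exc:
            defects.append((data, type(exc).__name__))
    return defects

found = monkey_test(parse_age)
```

Because the inputs are random, a run may or may not hit the empty-string bug, which mirrors why defects found this way are hard to replicate without notes.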

Various ways to make Adhoc Testing More Effective

  1. Preparation: By obtaining the defect details of a similar application, the probability of finding defects in the application under test increases.
  2. Creating a Rough Idea: With a rough idea in place, the tester will have a focussed approach. It is NOT required to document a detailed plan of what to test and how to test.
  3. Divide and Rule: By testing the application part by part, we gain better focus and a better understanding of any problems found.
  4. Targeting Critical Functionalities: A tester should target those critical areas that are NOT covered while designing test cases.
  5. Using Tools: Defects can also be brought to light by using profilers, debuggers and even task monitors. Being proficient with these tools, one can uncover several defects.
  6. Documenting the Findings: Though testing is performed randomly, it is better to document the tests if time permits and note down any deviations. If defects are found, corresponding test cases are created so that testers can retest the scenario.
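Point 6 above can be sketched as a lightweight record structure for adhoc findings; the field names and the example scenario are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AdhocFinding:
    """A lightweight record for a defect uncovered during adhoc testing,
    so the scenario can later be replayed as a formal test case."""
    summary: str
    steps_to_reproduce: list = field(default_factory=list)
    observed: str = ""
    expected: str = ""
    found_on: date = field(default_factory=date.today)

finding = AdhocFinding(
    summary="Profile page crashes when the avatar upload is cancelled",
    steps_to_reproduce=["Open profile", "Click 'Upload avatar'", "Press Cancel"],
    observed="Unhandled exception dialog",
    expected="Dialog closes, profile unchanged",
)
```

Even this much structure gives the steps needed to replicate the defect, which is exactly what adhoc testing otherwise lacks.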

Active Testing?

Active testing is a testing technique where the tester introduces test data and analyses the results.
During Active testing, a tester builds a mental model of the software under test, which continues to grow and refine as the tester's interaction with the software continues.

How we do Active Testing?

  1. At the end of each and every action performed on the application under test, we check whether the application seems to fulfil the client's needs.
  2. If not, either the model needs to be adapted, or there is a problem in the application. Continuous engagement in the testing process helps us come up with new ideas, test cases and test data.
  3. At the same time, we note down things we might want to return to later, or follow up on them with the concerned team, eventually finding and pinpointing problems in the software.
  4. Hence, any application under test needs active testing, which involves testers who spot the defects.

Accessibility Testing?

Accessibility testing is a subset of usability testing wherein the users under consideration are people of all abilities and disabilities. The significance of this testing is to verify both usability and accessibility.
Accessibility aims to cater to people with different impairments, such as:
  • Visual impairment
  • Physical impairment
  • Hearing impairment
  • Cognitive impairment
  • Learning impairment
A good web application should cater to all sets of people and NOT be limited to people with disabilities. These include:
  1. Users with poor communications infrastructure
  2. Older people and new users, who are often unfamiliar with computers
  3. Users of old systems (NOT capable of running the latest software)
  4. Users of NON-standard equipment
  5. Users with restricted access

How to Perform Accessibility Testing

The Web Accessibility Initiative (WAI) describes strategies for preliminary and conformance reviews of web sites, and includes a list of software tools to assist with conformance evaluations. These tools range from checkers for specific issues, such as colour blindness, to tools that perform automated spidering of a site.

Web Accessibility Testing Tools

Product       Vendor             URL
AccVerify     HiSoftware         http://www.hisoftware.com
Bobby         Watchfire          http://www.watchfire.com
WebXM         Watchfire          http://www.watchfire.com
Ramp Ascend   Deque              http://www.deque.com
InFocus       SSB Technologies   http://www.ssbtechnologies.com/

Role of Automated Tools in Accessibility Testing

The automated accessibility testing tools listed above are very good at identifying pages and lines of code that need to be manually checked for accessibility. They can:
  1. Check the syntax of the site's code
  2. Search for known patterns that humans have listed
  3. Identify pages containing elements that may cause problems
  4. Identify some actual accessibility problems
  5. Identify some potential problems
Interpreting the results from automated accessibility testing tools requires experience in accessibility techniques along with an understanding of technical and usability issues.
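As a rough illustration of the kind of syntactic check such tools automate, the sketch below flags `<img>` tags with missing `alt` text using Python's standard `html.parser`. The sample page is invented, and real tools perform many more checks than this:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags without a non-empty alt attribute -- one of the
    basic syntactic checks automated accessibility tools perform."""

    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt:
                # record (line, column) so a human can review the element
                self.problems.append(self.getpos())

page = ('<html><body>'
        '<img src="logo.png">'
        '<img src="chart.png" alt="Sales chart">'
        '</body></html>')
checker = AltTextChecker()
checker.feed(page)
# checker.problems now lists positions of images needing manual review
```

Note how the tool only points at a candidate problem; whether the missing `alt` actually harms accessibility still requires human judgement, as the paragraph above says.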

Tuesday, 14 June 2016

Acceptance Testing?

"Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications." The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
There are various forms of acceptance testing:
  • User Acceptance Testing
  • Business Acceptance Testing
  • Alpha Testing
  • Beta Testing

Acceptance Testing - In SDLC

The following diagram shows where acceptance testing fits in the software development life cycle.
The acceptance test cases are executed against the test data or using an acceptance test script, and the results are compared with the expected ones.
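A minimal sketch of that compare step, assuming a hypothetical `order_total` function as the system under test and invented test case IDs and data:

```python
# Expected results keyed by test case ID; in practice the actual values
# would come from exercising the real system under test.
acceptance_cases = {
    "AT-001": {"input": {"qty": 2, "unit_price": 9.99}, "expected_total": 19.98},
    "AT-002": {"input": {"qty": 0, "unit_price": 9.99}, "expected_total": 0.0},
}

def order_total(qty, unit_price):
    """Stand-in for the system under test."""
    return round(qty * unit_price, 2)

def run_acceptance_suite(cases):
    """Execute each case and compare actual results with expected ones."""
    results = {}
    for case_id, case in cases.items():
        actual = order_total(**case["input"])
        results[case_id] = "PASS" if actual == case["expected_total"] else "FAIL"
    return results

report = run_acceptance_suite(acceptance_cases)
```

Keeping the expected results alongside the test case IDs makes the pass/fail decision mechanical, which is what an acceptance test script aims for.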


Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes:
  • Functional Correctness and Completeness
  • Data Integrity
  • Data Conversion
  • Usability
  • Performance
  • Timeliness
  • Confidentiality and Availability
  • Installability and Upgradability
  • Scalability
  • Documentation

Acceptance Test Plan - Attributes

The acceptance test activities are carried out in phases. Firstly, the basic tests are executed; if the test results are satisfactory, the execution of more complex scenarios is carried out.
The Acceptance test plan has the following attributes:
  • Introduction
  • Acceptance Test Category
  • Operation Environment
  • Test Case ID
  • Test Title
  • Test Objective
  • Test Procedure
  • Test Schedule
  • Resources
The acceptance test activities are designed to reach one of the following conclusions:
  1. Accept the system as delivered
  2. Accept the system after the requested modifications have been made
  3. Do not accept the system

Acceptance Test Report - Attributes

The Acceptance Test Report has the following attributes:
  • Report Identifier
  • Summary of Results
  • Variations
  • Recommendations
  • Summary of To-Do List
  • Approval Decision





Wednesday, 11 May 2016

Test Planning Process with Explanations

Stage #1: Review and analyze the requirements 
This is the first step for any project and plays a very important role in any testing project.
While analyzing the requirements, the test team has to identify and determine what items have to be tested. These items are heavily based on how the end user will consume the system, and hence have to be measurable, detailed and meaningful.
The items or features identified generally describe what the particular software or product intends to do; these are characterized as functional requirements. There can also be some non-functional requirements identified, such as performance or end-to-end interaction between software components.
The people who are aware of the business goals and can suitably define the requirements are generally part of this activity. The requirements are then documented and circulated for review. All review comments and feedback must be incorporated to drive the document to final sign-off.
Stage #2: Scope of testing
The scope of testing is generally an extension of the requirement analysis phase, and the two are mostly considered a single activity, since they go hand in hand. Once the requirements are out, the test team determines which items are to be tested and which are not.
This activity should also determine which areas of testing are covered by which teams.
For example: a team dedicated to FVT (Function Verification Test) and a team performing SVT (System Verification Test) will have completely different scopes for testing; globalization testing may or may not be performed by FVT, and so on.
Also, if the test project requires automation, its feasibility is evaluated here. Having a clear scope defined will prove invaluable to management in figuring out exactly what has been tested and which team has covered which testing effort.

Tuesday, 5 April 2016

Agile model – advantages, disadvantages and when to use it

The Agile development model is also a type of incremental model. Software is developed in rapid, incremental cycles, resulting in small incremental releases, each building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. The model is used for time-critical applications. Extreme Programming (XP) is currently one of the best-known agile development life cycle models.
Diagram of Agile model:
[Diagram: Agile model in software testing]
Advantages of Agile model:
  • Customer satisfaction by rapid, continuous delivery of useful software.
  • People and interactions are emphasized rather than process and tools. Customers, developers and testers constantly interact with each other.
  • Working software is delivered frequently (weeks rather than months).
  • Face-to-face conversation is the best form of communication.
  • Close, daily cooperation between business people and developers.
  • Continuous attention to technical excellence and good design.
  • Regular adaptation to changing circumstances.
  • Even late changes in requirements are welcomed.
Disadvantages of Agile model:
  • In case of some software deliverables, especially the large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
  • There is a lack of emphasis on necessary design and documentation.
  • The project can easily get taken off track if the customer representative is not clear about what final outcome they want.
  • Only senior programmers are capable of taking the kind of decisions required during the development process. Hence it has no place for newbie programmers, unless combined with experienced resources.
When to use Agile model:
  • When new changes need to be implemented. The freedom agile gives to change is very important. New changes can be implemented at very little cost because of the frequency with which new increments are produced.
  • To implement a new feature, the developers need to lose only a few days' work, or even only hours, to roll back and implement it.
  • Unlike the waterfall model, in the agile model very limited planning is required to get started with the project. Agile assumes that the end users' needs are ever changing in a dynamic business and IT world. Changes can be discussed, and features can be added or removed based on feedback. This effectively gives the customer the finished system they want or need.
  • System developers and stakeholders alike find they also get more freedom of time and options than if the software were developed in a more rigid sequential way. Having options gives them the ability to leave important decisions until more or better data, or even entire hosting programs, are available; this means the project can continue to move forward without fear of reaching a sudden standstill.

difference between Severity and Priority?

There are two key attributes of defects in software testing:
1)     Severity
2)     Priority
What is the difference between Severity and Priority?
1)  Severity:
It is the extent to which the defect can affect the software. In other words, it defines the impact a given defect has on the system. For example: if an application or web page crashes when a remote link is clicked, clicking the remote link is rare for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.
Severity can be of following types:
  • Critical: The defect results in the termination of the complete system, or of one or more of its components, and causes extensive corruption of data. The failed function is unusable and there is no acceptable alternative method to achieve the required results.
  • Major: The defect results in the termination of the complete system, or of one or more of its components, and causes extensive corruption of data. The failed function is unusable, but there exists an acceptable alternative method to achieve the required results.
  • Moderate: The defect does not result in termination, but causes the system to produce incorrect, incomplete or inconsistent results.
  • Minor: The defect does not result in termination and does not damage the usability of the system; the desired results can easily be obtained by working around the defect.
  • Cosmetic: The defect is related to enhancement of the system, where the changes concern the look and feel of the application.
2)  Priority:
Priority defines the order in which we should resolve defects. Should we fix it now, or can it wait? This priority status is set by the tester for the developer, mentioning the time frame within which to fix the defect. If high priority is mentioned, the developer has to fix it at the earliest. The priority status is set based on customer requirements. For example: if the company name is misspelled on the home page of the website, then the priority is high and the severity is low.
Priority can be of following types:
  • Low: The defect is an irritant that should be repaired, but repair can be deferred until more serious defects have been fixed.
  • Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
  • High: The defect must be resolved as soon as possible because it is affecting the application or the product severely. The system cannot be used until the repair has been done.
A few very important scenarios related to severity and priority, which are asked about during interviews:
High Priority & High Severity: An error in the basic functionality of the application that will not allow the user to use the system. (E.g. on a site maintaining student details, if saving a record fails, this is a high priority and high severity bug.)
High Priority & Low Severity: A spelling mistake on the cover page, heading or title of an application.
High Severity & Low Priority: An error in the functionality of the application (for which there is no workaround) that will not allow the user to use the system, but which occurs on a link that is rarely used by the end user.
Low Priority & Low Severity: Any cosmetic or spelling issue within a paragraph or a report (NOT on the cover page, heading or title).
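The four interview scenarios above can be summarised as a small severity/priority matrix; the example strings below paraphrase this section and are illustrative only:

```python
def triage_example(severity, priority):
    """Maps a (severity, priority) combination to the interview scenario
    it corresponds to, paraphrasing the examples in this section."""
    examples = {
        ("high", "high"): "Save fails on a core record form",
        ("low", "high"): "Company name misspelled on the home page",
        ("high", "low"): "Crash behind a rarely used link, no workaround",
        ("low", "low"): "Spelling issue inside a report paragraph",
    }
    return examples[(severity, priority)]

# Severity measures impact on the system; priority measures fix urgency.
scenario = triage_example("low", "high")
```

Keeping the two axes separate in this way is exactly the point of the distinction: impact on the system (severity) is decided by what breaks, while fix order (priority) is decided by the customer's needs.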