Tuesday, 29 March 2016
How would you allocate tasks to team members?
What is a testing type and what are the commonly used testing types?
To get an expected test outcome, a standard procedure is followed, which is referred to as a testing type.
Commonly used testing types are:
- Unit Testing: Testing the smallest unit of code in an application
- API Testing: Testing the APIs created for the application
- Integration Testing: Individual software modules are combined and tested
- System Testing: Complete testing of the system
- Install/Uninstall Testing: Testing installation and removal of the application from the client/customer point of view
- Agile Testing: Testing carried out using Agile techniques
What are the common mistakes that create issues?
- Matching resources to wrong projects
- Lack of skills in the test manager
- Not listening to others
- Poor Scheduling
- Underestimating
- Ignoring the small problems
- Not following the process
What does a typical test report contain? What are the benefits of test reports?
A test report contains the following:
- Project Information
- Test Objective
- Test Summary
- Defect
The benefits of test reports are:
- The current status of the project and the quality of the product are communicated
- If required, stakeholders and customers can take corrective action
- A final document helps to decide whether the product is ready for release
What is test management review and why is it important?
Management review is also referred to as Software Quality Assurance or SQA. SQA focuses more on the software process rather than the software work products. It is a set of activities designed to make sure that the project manager follows the standard process. SQA helps the test manager benchmark the project against the set standards.
In manual testing, what are stubs and drivers?
Both stubs and drivers are part of incremental testing. In incremental testing there are two approaches, namely the bottom-up and the top-down approach. Drivers are used in bottom-up testing and stubs are used in the top-down approach. In order to test the main module, a stub is used, which is a dummy piece of code or program.
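A minimal sketch of the idea in Python (the module names and behaviour are purely illustrative, not from a real project): a stub stands in for a lower-level module that is not ready yet, while a driver exercises a lower-level module whose caller does not exist yet.

```python
# Top-down: the module under test calls a lower-level module that is not
# ready yet, so a stub with canned behaviour is substituted for it.
def payment_gateway_stub(amount):
    """Stub standing in for the real payment module; always approves."""
    return {"status": "approved", "amount": amount}

def checkout(amount, gateway=payment_gateway_stub):
    """Main module under test; the real gateway can be injected later."""
    result = gateway(amount)
    return result["status"] == "approved"

# Bottom-up: a lower-level module is ready but its caller is not, so a
# simple driver calls it directly and checks the result.
def tax_calculator(price, rate=0.18):
    """Low-level module that is already implemented."""
    return round(price * rate, 2)

def driver():
    """Driver that invokes the modules under test and verifies the output."""
    assert tax_calculator(100) == 18.0
    assert checkout(50) is True
    print("stub and driver checks passed")

if __name__ == "__main__":
    driver()
```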
What is the difference between a Test Matrix and a Traceability Matrix?
Test Matrix: A test matrix is used to capture the actual quality, effort, plan, resources, and time required to cover all phases of software testing
Traceability Matrix: The mapping between test cases and customer requirements is known as a Traceability Matrix
When is the RTM (Requirement Traceability Matrix) prepared?
RTM is prepared before test case designing. Requirements should be traceable from review activities.
What are the best practices for software quality assurance?
The best practices for an effective SQA implementation are:
- Continuous Improvement
- Documentation
- Tool Usage
- Metrics
- Responsibility by team members
- Experienced SQA auditors
What are the steps you would follow once you find a defect?
Once a defect is found, you would follow these steps:
a) Recreate the defect
b) Attach a screenshot
c) Log the defect
what is "Test Plan Driven" or "Key Word Driven" method of testing?
This technique uses the actual test case document developed by testers using a spread sheet containing special "key Words". The key words control the processing.
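A minimal keyword-driven sketch in Python (the keywords, actions, and test steps are hypothetical): each row that would normally come from the spreadsheet supplies a keyword plus its arguments, and a keyword table maps it to the action to run.

```python
# Actions that the keywords resolve to.
def open_url(url):
    print(f"opening {url}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def click(element):
    print(f"clicking {element}")

# Keyword table: maps the keyword column of the sheet to an action.
KEYWORDS = {
    "OpenURL": open_url,
    "EnterText": enter_text,
    "Click": click,
}

# Rows as they might be read from the spreadsheet.
test_steps = [
    ("OpenURL", ["https://example.com/login"]),
    ("EnterText", ["username", "qa_user"]),
    ("EnterText", ["password", "secret"]),
    ("Click", ["login_button"]),
]

for keyword, args in test_steps:
    KEYWORDS[keyword](*args)  # the keyword controls the processing
```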
What is a DFD (Data Flow Diagram)?
When the "flow of data" through an information system is represented graphically, it is known as a Data Flow Diagram. It is also used for the visualization of data processing.
What is LCSAJ?
LCSAJ stands for 'linear code sequence and jump'. It consists of the following three items:
a) Start of the linear sequence of executable statements
b) End of the linear sequence
c) The target line to which control flow is transferred at the end of the linear sequence
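As a rough illustration (hypothetical code, with just one LCSAJ of interest picked out), consider the small function below: the linear sequence runs from the first executable statement up to the `if`, and the jump transfers control to the final `return` when the condition is false.

```python
def grade(score):
    total = score + 5    # (a) start of the linear sequence of executable statements
    if total < 50:       # (b) end of the linear sequence: control may jump from here
        return "fail"
    return "pass"        # (c) target line reached by the jump when the condition is false
```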
What is N+1 testing?
A variation of regression testing is represented as N+1. In this technique the testing is performed in multiple cycles, in which errors found in test cycle 'N' are resolved and re-tested in test cycle N+1. The cycle is repeated until no errors are found.
What is fuzz testing and when is it used?
Fuzz testing is used to detect security loopholes and coding errors in software. In this technique random data is fed to the system in an attempt to crash it. If a vulnerability is found, a tool called a fuzz tester is used to determine potential causes. This technique is more useful for bigger projects but only detects major faults.
What are the main advantages of the statement coverage metric of software testing?
The benefits of the statement coverage metric are that:
a) It does not require processing source code and can be applied directly to object code
b) Assuming bugs are distributed evenly through the code, the percentage of executable statements covered reflects the percentage of faults discovered
How do you generate test cases for a replace-string method?
a) If the number of characters in the new string is greater than in the previous string, none of the characters should get truncated
b) If the number of characters in the new string is smaller than in the previous string, junk characters should not be added
c) Spaces before and after the string should not be deleted
d) The string should be replaced only for the first occurrence
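A minimal sketch of these cases as automated checks, assuming a hypothetical replace_string() helper that replaces only the first occurrence (backed here by Python's str.replace with a count of 1 purely for illustration):

```python
def replace_string(text, old, new):
    """Hypothetical helper under test: replace only the first occurrence."""
    return text.replace(old, new, 1)

def test_longer_replacement_not_truncated():
    # a) new string longer than the old one: nothing gets truncated
    assert replace_string("pay by card", "card", "credit card") == "pay by credit card"

def test_shorter_replacement_adds_no_junk():
    # b) new string shorter than the old one: no junk characters appear
    assert replace_string("credit card", "credit card", "card") == "card"

def test_surrounding_spaces_preserved():
    # c) spaces before and after the string are not deleted
    assert replace_string("  hello world  ", "hello", "hi") == "  hi world  "

def test_only_first_occurrence_replaced():
    # d) only the first occurrence is replaced
    assert replace_string("aaa", "a", "b") == "baa"
```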
How will you handle a conflict amongst your team members?
- I will talk individually to each person and note their concerns
- I will find solutions to the common problems raised by team members
- I will hold a team meeting, reveal the solution, and ask people to co-operate
Mention what are the categories of defects?
Mainly, there are three defect categories:
- Wrong: When a requirement is implemented incorrectly
- Missing: It is a variance from the specification, an indication that a specification was not implemented or a requirement of the customer is not met
- Extra: A requirement incorporated into the product that was not given by the end customer. It is considered as a defect because it is a variance from the existing requirements
How does a test coverage tool work?
The code coverage tool runs in parallel while testing is performed on the actual product. It monitors the executed statements of the source code. When the final testing is done, we get a complete report of the pending statements and also get the coverage percentage.
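As a rough illustration of how such a tool is driven, here is a sketch using the coverage.py API (the module under test, my_module, is hypothetical):

```python
import coverage

cov = coverage.Coverage()
cov.start()                 # monitor which statements execute from here on

import my_module            # hypothetical code under test
my_module.main()            # exercise the product while coverage records lines

cov.stop()
cov.save()
cov.report()                # prints covered/missed statements and the coverage percentage
```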
difference between a "defect" and a "failure" in software testing?
In simple terms when a defect reaches the end customer it is called a failure while the defect is identified internally and resolved then it is referred as defect.
How do you test documents in a project that span across the software development lifecycle?
The documents span the software development lifecycle in the following manner:
- Central/Project test plan: It is the main test plan that outlines the complete test strategy of the project. This plan is used till the end of the software development lifecycle
- Acceptance test plan: This document begins during the requirement phase and is completed at final delivery
- System test plan: This plan starts during the design plan and proceeds until the end of the project
- Integration and Unit test plan: Both these test plans start during the execution phase and last until the final delivery
Which test cases are written first: black box or white box?
Black box test cases are written first, because writing them requires only the project plan and requirement documents, which are readily available at the beginning of the project. Writing white box test cases requires more architectural understanding, which is not available at the start of the project.
What is the difference between latent and masked defects?
- Latent defect: A latent defect is an existing defect that has not caused a failure because the sets of conditions were never met
- Masked defect: It is an existing defect that has not caused a failure because another defect has prevented that part of the code from being executed
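A rough sketch (entirely hypothetical code) of a masked defect: the bug in apply_discount() never surfaces because the earlier bug in validate_cart() stops execution before that code is reached.

```python
def validate_cart(items):
    # Defect 1: raises IndexError for an empty cart instead of returning False.
    return items[0] is not None

def apply_discount(total, coupon):
    # Defect 2 (masked): division by zero when coupon == 0.
    return total - total / coupon

def checkout(items, total, coupon=0):
    if not validate_cart(items):
        return None
    return apply_discount(total, coupon)

# checkout([], 100) fails with IndexError in validate_cart(), so the
# ZeroDivisionError in apply_discount() stays hidden until defect 1 is fixed.
```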
What is bottom-up testing?
Bottom-up testing is an approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.
What are the different types of test coverage techniques?
Different types of test coverage techniques include
- Statement Coverage: It verifies that each line of source code has been executed and tested
- Decision Coverage: It ensures that every decision in the source code is executed and tested
- Path Coverage: It ensures that every possible route through a given part of code is executed and tested
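A small sketch (hypothetical function) contrasting the three techniques on the same piece of code:

```python
def classify(a, b):
    result = "none"
    if a > 0:
        result = "a"
    if b > 0:
        result = "b"
    return result

# Statement coverage: classify(1, 1) alone executes every line.
# Decision coverage: classify(1, 1) and classify(0, 0) make each `if`
#   evaluate to both True and False.
# Path coverage: all four combinations (1,1), (1,0), (0,1), (0,0) are
#   needed to exercise every route through the function.
```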
What is the meaning of breadth testing?
Breadth testing is a test suite that exercises the full functionality of a product but does not test features in detail
What is the difference between pilot and beta testing?
The difference is that pilot testing is done by a group of users actually using the product before the final deployment, while in beta testing we do not input real data; instead, the product is installed at the end customer's site to validate whether it can be used in production.
What is the meaning of Code Walk Through?
Code Walk Through is the informal analysis of the program source code to find defects and verify coding techniques
What are the basic components of a defect report format?
- Project Name
- Module Name
- Defect detected on
- Defect detected by
- Defect ID and Name
- Snapshot of the defect
- Priority and Severity status
- Defect resolved by
- Defect resolved on
What is the purpose behind doing end-to-end testing?
End-to-end testing is done after functional testing. The purposes behind doing end-to-end testing are:
- To validate the software requirements and integration with external interfaces
- Testing the application in a real-world environment scenario
- Testing the interaction between the application and the database
Explain what is meant by a test harness?
A test harness is a configuration of tools and test data set up to test an application under various conditions; it involves comparing the observed output with the expected output for correctness.
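A minimal sketch of the test harness idea, assuming a stand-in calculate_shipping() function as the application under test: prepared test data drives the application and the observed output is compared with the expected output.

```python
def calculate_shipping(weight_kg):      # stand-in for the real application code
    return 5.0 if weight_kg <= 1 else 5.0 + (weight_kg - 1) * 2.0

TEST_DATA = [                            # (input, expected output)
    (0.5, 5.0),
    (1.0, 5.0),
    (3.0, 9.0),
]

def run_harness():
    """Drive the application with the test data and compare outputs."""
    failures = []
    for value, expected in TEST_DATA:
        actual = calculate_shipping(value)
        if actual != expected:
            failures.append((value, expected, actual))
    print(f"{len(TEST_DATA) - len(failures)}/{len(TEST_DATA)} checks passed")
    return failures

if __name__ == "__main__":
    run_harness()
```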
Explain, in a testing project, what testing activities you would automate?
In a testing project, the testing activities you would automate are:
- Tests that need to be run for every build of the application
- Tests that use multiple data for the same set of actions
- Identical tests that need to be executed using different browsers
- Mission-critical pages
- Transactions with pages that do not change in a short time
How to Improve Software Quality Using the Continuous Integration Process
Continuous integration (CI) is the real meat behind the CD process and is what makes Continuous Delivery possible.
To understand CI, let's take the terms at face value and deduce a basic definition. The first word means "ongoing" or "frequent" and the second "merged" or "made part of". So CI is a process where something is being "merged" "frequently".
Logically the next question is: What is the something being merged and where is it merged?
Considering that CD is just the conceptual extension of CI, the answer is sort of obvious isn’t it?
The “something” that is being merged is code and the “where” is a repository or Version control.
Thus, Continuous integration is a process where code is checked into a repository very frequently.
Additionally, as discussed in the previous article, tests are run immediately each time a code check-in happens to catch any errors, thereby setting up the necessary feedback loop for Continuous Delivery.
Continuous Integration process
Let’s retrace back to the CD pipeline diagram discussed previously. The red circle will be our focus in this article in order to understand the CI process.
Even though the CI process may seem very development centric, it’s vital for QA engineers to get an overall picture and adapt accordingly.
Before we proceed any further, the following terminology is important to know:
- Source code or version control system: Has all the code related to a project/feature
- Mainline: The most recent state of the code in a version control/source code system
- Local copy: If you have worked with an Eclipse-like IDE, you will know that you can import a project's codebase onto your local machine at a particular location. That location in your local system is the "local copy".
- Check out: When a developer begins to work, the common practice is to import the latest source code into the local copy. This activity is a "check out".
Say there are two or three development engineers working on a feature and using Continuous Integration.
This is how the sequence of events would appear:
1) On the local copy, the developer builds the code for the new feature.
2) After coding is done, the developer writes the required automated unit tests to test the code.
3) A local build is run to ensure that the newly added code doesn't break anything.
4) Once the build succeeds, the developer checks whether any peers have made new check-ins.
5) In case there are new incoming changes, they have to be accepted first to make the local copy current.
6) Synching with the mainline might result in some conflicts due to the newly merged changes.
7) If a conflict arises, it is fixed so that the changes are in sync with the mainline code.
8) After step 7, the code changes are ready to be checked in. This is called a "code commit".
9) After the commit, another build pertaining to the source code (mainline) is run on an integration system.
10) The build is now ready to be consumed by the next stages.
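As a rough sketch of steps 1 to 8 on the developer's machine (the branch name and test locations are assumptions, and the real flow would usually go through a CI server rather than a script), the local part of the cycle could look like this:

```python
import subprocess

def run(cmd):
    """Run a command and stop the flow on the first failure."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

def local_ci_cycle():
    run(["python", "-m", "pytest", "tests/unit"])        # 2-3) unit tests / local build
    run(["git", "pull", "--rebase", "origin", "main"])   # 4-7) sync with the mainline
    run(["python", "-m", "pytest", "tests/unit"])        # re-check after merging changes
    run(["git", "push", "origin", "main"])               # 8) code commit to the repository
    # 9-10) the CI server then runs the mainline/integration build

if __name__ == "__main__":
    local_ci_cycle()
```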
CI Benefits
#1) Errors detected early: Whether it is an error in the local copy or in code checked in directly without being synched to the mainline, a build failure will occur at the appropriate stage. It forces the developer to fix the bug before proceeding further. QA teams also benefit from this, as they will mostly be working on stable builds.
#2) Decreases bug accumulation: Bugs are inevitable; however, with the use of CI the piling up of bugs is greatly reduced. Depending on how effective the automation is, bugs are easy to find and fix early, greatly reducing risk.
#3) Setting the stage for Continuous Delivery: CI reduces manual intervention because the build, sanity checks, and other tests are all supposed to be automated. This paves the way for successful continuous delivery.
#4) Increased transparency: CI brings a greater level of transparency into the overall development and QA processes. There is, at all times, a clear indication of which tests are failing, the causes of failure, defects, etc., enabling you to make factual decisions on where and how to improve efficiency.
#5) Cost effectiveness: Given the above points of early error detection, less bug accumulation, more automation, and clarity in the overall system, it goes without saying that cost is optimized.
CI in QA – point of view
This is a natural and logical extension to our previous discussion. These factors have to be in place to be able to use CI in testing.
#1) Initial tests: We left off at a point where we now have a good build after a code commit. The code commit should trigger some automated tests – smoke or sanity checks certifying that the build is now ready for QA.
#2) Automation Framework: To stay true to CI, every QA team should invest in building a test automation framework that automatically triggers tests that uncover not only feature specific shortcomings but also identify framework enhancement requirements (for current and new tests).
#3) Parallel testing using Automation: A robust automation framework facilitates parallel testing and replication of production with various configurations. It also yields better test coverage and fewer bug escapes.
For instance, if support for two or more browsers on certain Operating Systems is required, then tests that simulate the needed configurations can be set up and run in parallel, thereby drastically increasing test efficiency.
#4) Automation across different types of testing: Continuing the previous point, automated test coverage should include different types of testing – functional and non-functional tests – such as stress, load, performance, regression, database, acceptance, etc.
#5) Bugs: This is particularly interesting, because even the logging of bugs can be automated with a CI system! You can poll for certain kinds of errors coming up in the logs and, on encountering them, a bug is automatically logged. A typical example of this situation is auto-logging bugs for NullPointerExceptions observed in a server trace log.
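A rough sketch of the idea (the log path and the bug-tracker call are hypothetical): poll the server trace log for the error pattern and file a bug automatically when it appears.

```python
import re

NPE_PATTERN = re.compile(r"java\.lang\.NullPointerException")

def file_bug(summary):
    """Stand-in for a real bug-tracker API call."""
    print(f"bug filed: {summary}")

def scan_log(path="server_trace.log"):   # hypothetical log file location
    with open(path) as log:
        for line_no, line in enumerate(log, start=1):
            if NPE_PATTERN.search(line):
                file_bug(f"NullPointerException at {path}:{line_no}: {line.strip()}")

if __name__ == "__main__":
    scan_log()
```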
Continuous Integration Tools
What kind of CI tools are used depends on each organization. While tools like Hudson, CruiseControl, and Jenkins are popular choices, there are many other tools that provide similar capabilities in addition to their own unique features.
The decision of choosing a CI tool depends on a lot of factors, such as:
- Being able to integrate with configuration management tools
- Easy, customizable reporting
- Integration with automation tools, etc.
Here is the list of CI tools with features.
CI Implementation and Best practices
Continuous Integration aims to drastically reduce errors during software development through feedback mechanisms, automation, and quick bug-fix turnaround.
Although it may seem too ambitious for a process to achieve all of this, it can certainly be a reality with some of the continuous integration best practices described below:
#1) Shared repository to maintain code: With Agile evolving rapidly, it is a given that there are multiple developers working on different or same features of a product. It is therefore absolutely necessary to have one repository that will be able to capture the timeline of changes that all developers are making.
Version or source control tools help you create different development streams and better prepare the teams to be able to respond to these needs. They also help in keeping all the artifacts needed to perform a complete build (libraries, properties, environment variables, test scripts, etc.) in one place.
#2) Trigger automated builds through CLI: Builds need to be automated to the point that they can be triggered through a CLI.
For example, you can use ANT or Maven to spin a build. This means that one should be able to connect to the build server, load the required assets from an online or local repository, and compile and run the whole build with a single command. Sometimes, in a large build, some of the artifacts will already have been downloaded as part of a previous build; therefore, the build tool must be able to gauge this and download only the needed resources.
A build may compile successfully, but that doesn't mean it is suitable for testing. Therefore, it is also important to have some tests incorporated as part of the build, in order to discover any obvious bugs early.
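As a minimal sketch, assuming a Maven project and a hypothetical "smoke-tests" profile, the same build-plus-basic-tests step could be triggered from a script or a CI job with one entry point:

```python
import subprocess

def build_and_smoke_test():
    # Compile, run unit tests, and package the build in one command.
    subprocess.run(["mvn", "clean", "install"], check=True)
    # Run a small smoke suite against the packaged build (profile name is an assumption).
    subprocess.run(["mvn", "verify", "-Psmoke-tests"], check=True)

if __name__ == "__main__":
    build_and_smoke_test()
```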
#3) Frequent code commits: The beauty of this system is that even with multiple developers, code conflicts are easily caught. This is because each developer has to update the local copy before committing. If this is done daily or as often as possible: more commits = a local copy more up to date with the mainline; a more up-to-date local copy = fewer conflicts; fewer conflicts = stability!
#4) Quick Build time: A longer build time defies the whole purpose of Continuous Integration because it won’t be possible to get ongoing fast feedback. Secondly, frequent code commits will become increasingly difficult. How do we combat this? This takes us to the next best practice of having Staged builds.
#5) Staging builds: In order to expedite the build process, the build pipeline could be broken down into smaller chunks and executed in parallel.
#6) Run the mainline build on an integration machine: In the Continuous Integration process, we talked about running a second build pertaining to the mainline code. This build happens on an integration machine. You might wonder why. As testers, we encounter situations where bugs are seen only in a particular environment and not in another. This is exactly why the mainline build is run on an integration machine.
Sometimes the integration machine springs up surprises that didn't exist on a developer's local system. Human errors, such as not synching your code with the mainline, will also show up here. Therefore, only once the build succeeds here can the commit be declared successful.
#7) Create a duplicate of the production system: As testers, we're all too familiar with environment-related defects. Production systems have their own configurations in terms of database levels, operating system, OS patches, libraries, networking, storage, etc.
It is a good practice to have the test environment as close as possible to the production system, if not an exact replica. Any major discrepancies and risks can be easily identified this way before they actually hit production systems.
#8) Automating deployment: In order to get to the point of running different kinds of tests in a CI model, the test setup has to be done as a prelude. As a best practice, you could have scripts that automatically set up the needed runtime environments for testing.
#9) Publish build locations: With frequent builds getting spun, it is important to let all consumers of the build know where the latest build can be found. A repository where builds are published can be shared with everyone involved. Likewise, build and initial sanity results should also be published, so that people can see what integrations or fixes have gone in.
Continuous Delivery: How to Have Reliable Software Releases to Production at Any Time
Software development has seen a steep change in outlook and approach to keep up with current market trends and consumer needs. While the traditional waterfall approach was more sequential and planned, it had setbacks in terms of satisfying customer expectations of the final product.
The agile methodology then came as the perfect answer as it ensured functionality/feature installments to be delivered at the end of short development cycles or sprints. Soon it became the way of life.
However, to take Agile to the next level, it is essential to test the software almost at the same time as it is built. This method of "shift left" or "early testing" drastically boosts the accuracy of the software and makes it less error-prone.
With this came a need for CD and CI processes to be intricately carved into Agile. It is these processes that provide the needed momentum for the build, test and release activities to function seamlessly.
What is Continuous Delivery?
Martin Fowler of ThoughtWorks defines Continuous Delivery as “A software development discipline where you build software in such a way that the software can be released to production at any time.”
Based on this, here are some obvious conclusions:
- Software being built is heavily dependent on a process which has to be thorough, accurate and repeatable.
- Since the software should always be production-ready, it is imperative that it has to be more user-centric. How do you do this? Take feedback from the users as early as possible.
At the heart of CD lies Continuous Integration which enables the users to always have a build with the latest code changes. This means that developers must put in their code changes frequently, implying frequent builds. Thus, CD consumes Continuous Integration and is a superset of it.
Implementing Continuous Delivery:
Below is a general high-level overview of CD implementation.
#1) Automation: Processes where accuracy is the need of the hour certainly demand a high degree of automation. This is also in line with basic software quality principles that recommend investing in automating tests for tasks that are repetitive in nature.
Continuous Delivery almost thrives on and encourages automation in every possible area, whether it is gathering requirements or the slightly more difficult area of deployment. Setting up automation is the first step to achieving the desired reliability for CD.
#2) Version Control System: In a fast paced project methodology there would be several people working on the same code or project asset simultaneously. This necessitates a version control system in place to make it easier to track any changes and view the modifications by a team member.
#3) End to end ownership: CD completely breaks free from the shackles of working in silos. This means a developer’s responsibility doesn’t end at checking in his code. Likewise, a QA engineer doesn’t stop at testing scenarios, opening bugs and retesting them at just the code level. Developers, testers, project managers and everyone involved has to collectively strive for the correct functioning of the delivered software.
#4) Quality Metrics: Measuring success and establishing benchmarks are instrumental in achieving the needed success for a CD process. When analytics and metric collection are in place, problematic areas show up easily. Recovery actions can then be identified based on these problems.
#5) DevOps: This is probably the most important and fundamental thing to focus on when implementing a CD process. DevOps brings the unification of two isolated entities in SDLC, development and operations.
Right from setting up environments and configuring them to managing part of the build process and testing build deployments, DevOps drives a big change in perspective. This kind of collaboration ensures rapid deployment. Therefore, it is safe to say that it forms a stepping stone for a successful CD process.
#6) Enhancements and Improvisation: Identify how the CD process can be continuously improved, easily maintained and kept up to date. In case of any failures anywhere in the process, it is absolutely better to nip it in the bud than have a temporary fix or workaround.
Continuous Delivery Pipeline
Below is a diagrammatic representation of the overall working of the CD process.
Please note that:
- Every juncture in the pipeline, starting with check-in and ending with production deployment, has an arrow pointing back to the previous block. This only goes to reiterate how heavily the CD process relies on feedback.
- Say a developer checks new code into source control; the CD process kicks in, a build is spun, and sanity tests are triggered. Only if the build is deemed a "good build" is it passed on to the next quality check stage.
- This is where the QA team comes into the picture and runs the required tests. They could be basic functional verifications or non-functional stress, load, performance tests, etc.
- If at any point there is a failure or noncompliance, it is flagged and will have to be fixed for the CD pipeline to proceed further.
- Finally, after that particular build is deemed to have the quality approval, it is deployed to production.
Role of QA in CD: Continuous Testing
From the CD pipeline above, we can see how Quality Assurance surfaces in the whole scheme of things. Now let's get a deeper understanding of what CD means from a QA perspective and how a whole new approach to QA has to be embraced.
In a traditional testing model, there is a time gap between a developer checking in the code and a tester receiving the build containing that code. However, in CD, new code gets integrated quite frequently and continuously. Therefore, it is an implicit understanding that testing has to be accelerated for it to live up to being "continuous".
How is this possible?
The answer is obvious and what we’ve been discussing – Automation! Test Automation is the key to successful testing in a CONTINUOUS DELIVERY process – the more, the better; on all levels.
Secondly, functionality drops may happen within a couple of days because software is delivered in small installments. This means that there may not be adequate time for full test planning. The delivered installment may not even have a GUI yet. Therefore, the testing approach has to be re-factored and has to undergo a drastic change.
What does this imply to a tester?
The strict role definitions of developer and tester need to be overcome, almost blurring the lines between their skills. Testers have to be familiar with what is being developed right from the beginning and gain the required skills; likewise, developers will need to know how to test.
Factors that form a reference framework to enable continuous testing
#1) Applying reasoning and questioning:
In addition to testers upgrading their technical skills and getting involved as early as possible, a tester has to apply analytical reasoning about the product being built. This means testers must have the ability to:
- Look at the bigger picture
- Understand the real business usage
- Decipher and determine if the captured requirements are in the right direction
- Gauge the environment and setup needed to facilitate testing of scenarios identified
- Decide how different tests need to be carried out – functional, nonfunctional, accessibility, security, etc.
- Figure out how to maximize automation at all levels
This takes us right to the next point, strategizing test effort.
#2) Test strategy creation:
Deriving a model that ensures extensive testing to filter out any potential issues at various levels of development will ensure a steep rise in quality. This is defined as "layering of tests" and includes the development arena too.
The more layers you add, the higher the quality is.
Let’s understand this better with an example:
Consider the continuous testing model below. Now whenever a code change happens, it will pass through the layers.
- Typically, development will run some automated unit tests before committing their changes to ensure that their code is in a suitable shape. In this model, sometimes development also borrows happy path tests from QA and runs them additionally. This constitutes the first block in the model.
- After a change is committed in version control, CI runs extensive automated tests in the next layer.
- Now, if there are any failures in the previous step, it is a good idea to add another layer before the code is passed on to the test team. This explains the dashed box in the third layer; let's call it the feedback box. Based on the failures in the previous step, one can run some manual tests to check sanity, analyze the coverage of the automated tests, and add additional tests to maximize coverage here.
- The next layer is where QA takes over. Validating functionality, performance tests, etc. are included.
- After the previous layer establishes some confidence in the stability of the features and that most bugs are fixed, the next layer, acceptance tests, can be added to establish further confidence.
- Finally, when the software is deployed to production, it should again be monitored proactively for any bugs that might come up.
#3) Improvising the process:
In a CD environment, you have to keep improving the entire testing process in order to have an edge. This can be as small as having a good communication channel or as significant as improving test scenarios, test coverage, test tools, etc.
#4) Upgrading technical skills, test tools usage:
It has become necessary for testers to take their role to the next level and upgrade their technical knowledge by understanding the architecture, the code and component dependencies. It is also an absolute must for testers to have coding knowledge so that they can add more value to automation. Upgrading skills must also be extended to the usage of test tools which will help the automation and general testing effort in CD.
Continuous Delivery Tools:
Since automation forms the crux of CD (not just functional test automation, but end-to-end automation of the pipeline), there are tools used at every stage of the process, up until deployment. These tools, for version control, unit testing, CI, build automation, quality checks, etc., help accomplish the objectives of each stage in a comprehensive way.
Here is a list of some continuous delivery tools.
Benefits of CD:
After the above discussion, it is automatically understood that there are many benefits to be derived from CD.
Here are the most striking ones:
- Less error-prone: Since small pieces of code get integrated frequently, the chances of any error being missed are greatly reduced.
- Quick feedback/resolution time: Because early feedback is integrated into CD, there is higher product quality in general. This also helps in delivering a product that is actually useful to the end user, thereby being more customer-centric.
- Fast time to market: CD helps cope with an ever-changing market and consumer needs through frequent and quick releases. The turnaround time to convert an idea or concept into reality is remarkably fast.
What is Technical Debt and Why Should QA Testers be Concerned About It?
Technical debt represents the effort needed to fix the issues/defects that remain in the code when an application is released. In simple words, it's the difference (in terms of bugs) between what is expected and what is delivered.
When a development team is busy working on a project and fixing bugs, unfortunately, many new bugs appear. Out of these, some are fixed and some are deferred to a later release. When this deferring of issues keeps increasing, at some point it becomes really difficult to release the product on time without issues. This is the worst consequence of technical debt if it's not tackled in time.
Why QA Teams suffer the most due to Technical Debt
During a typical software design and development cycle, several things can lead to a "technical debt" like situation: improper documentation, inadequate testing and bug fixing, lack of coordination between teams, legacy code and delayed refactoring, absence of continuous integration, and other out-of-control factors.
For example, it has been observed that code duplication can lead to anything between 25% and 35% extra work.
However, nowhere are the challenges due to technical debt more evident than in QA testing where test teams have to meet unexpected deadlines and everything may be thrown out of gear.
How often have your testers faced quandaries at the last moment when unexpectedly, the delivery manager came and told them, “Team! We have to roll out our product in a week’s time, sorry about not being able to communicate this in time. Please finish with all test tasks urgently so that we can be ready with the demo.”
Basically, any missed tests or “solve it later” approach can lead to a tech debt like problem. Lack of test coverage, oversized user stories, short sprints, and other examples of “cutting corners” due to delivery pressure play a huge role behind the accumulation of technical debt in QA practice.
Difference Between Test Plan, Test Strategy, Test Case, Test Script, Test Scenario and Test Condition?
What is the difference between a Test Plan and a Test Strategy?
The test plan is both a term and a deliverable. It is a document that lists all the activities in a QA project, schedules them, and defines the scope of the project, roles and responsibilities, risks, entry and exit criteria, the test objective, and anything else you can think of. The test plan is, as I like to call it, a 'super document' that lists everything there is to know and need.
The test strategy is also a deliverable, and a document at that. It outlines the testing approach and everything else that surrounds it. It differs from the test plan in the sense that a test strategy is only a subset of the test plan. It is a hard-core test document that is, to an extent, generic and static. There is also an argument about at what levels a test strategy or plan is used, but I really do not see any discerning difference.
Example: The test plan gives information on who is going to test what and at what time. For example, Module 1 is going to be tested by "tester X". If tester Y replaces X for some reason, the test plan has to be updated.
On the contrary, the test strategy will have details like "Individual modules are to be tested by test team members." In this case, it does not matter who is testing, so it's generic, and a change in team members does not have to be updated, keeping it static.
What is the difference between a Test Case and a Test Script?
In my opinion, these two terms can be used interchangeably. Yes, I am saying there is no difference. A test case is a sequence of steps that helps us perform a certain test on the application. A test script is the same thing.
Now, there is one school of thought that a test case is a term used in the manual testing environment and a test script is used in an automation environment. This is partly true, because of the comfort level of testers in the respective fields and also because of how the tools refer to the tests (some call them test scripts and some call them test cases). So in effect, both a test script and a test case are steps to be performed on an application to validate its functionality, whether manually or through automation.
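A minimal sketch of the point, using a hypothetical FakeApp stand-in: the same three manual steps (open the login page, enter valid credentials, verify the dashboard) written as an automated test script.

```python
class FakeApp:
    """Stand-in for the application under test."""
    def open_login_page(self):
        self.page = "login"
    def login(self, user, pwd):
        self.page = "dashboard" if pwd == "secret" else "login"
    def current_page(self):
        return self.page

def test_valid_login_shows_dashboard():
    app = FakeApp()
    app.open_login_page()                       # step 1: open the login page
    app.login("qa_user", "secret")              # step 2: enter valid credentials
    assert app.current_page() == "dashboard"    # step 3: expected result
```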
What is the difference between a Test Scenario and a Test Condition?
A test scenario is a one-line pointer that testers create as an initial, transitional step into the test design phase. It is mostly a one-line definition of "what" we are going to test with respect to a certain feature. Usually, test scenarios are an input for the creation of test cases. In agile projects, test scenarios are the only test design outputs, and no test cases are written following them. A test scenario might result in multiple tests.
Example test scenarios:
1. Validate if a new country can be added by the Admin
2. Validate if an existing country can be deleted by the admin
3. Validate if an existing country can be updated
Test conditions, on the other hand, are more specific. A test condition can be roughly defined as the aim/goal of a certain test.
Example test conditions:
In the above example, if we were to test scenario 1, we could test the following conditions:
1. Enter the country name as "India" (valid) and check for the addition of the country
2. Enter a blank name and check whether the country gets added
In each case, the specific data is described and the goal of the test is much more precise.
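A minimal sketch (assuming a hypothetical add_country() function) turning the two conditions above into executable checks:

```python
countries = []

def add_country(name):
    """Hypothetical admin operation under test."""
    if not name or not name.strip():
        raise ValueError("country name is required")
    countries.append(name)

def test_valid_country_is_added():        # condition 1
    add_country("India")
    assert "India" in countries

def test_blank_country_is_rejected():     # condition 2
    try:
        add_country("")
        assert False, "blank country should not be added"
    except ValueError:
        assert "" not in countries
```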
What is the difference between a Test Procedure and a Test Suite?
A test procedure is a combination of test cases based on a certain logical reason, like executing an end-to-end situation or something to that effect. The order in which the test cases are to be run is fixed.
For example, if I were to test the sending of an email from Gmail.com, the order of test cases that I would combine to form a test procedure would be:
1. The test to check the login
2. The test to compose email
3. The test to attach one/more attachments
4. Formatting the email in the required way by using various options
5. Adding contacts or email addresses to the To, BCC, CC fields
6. Sending email and making sure it is showing in the “Sent Mail” section
All the test cases above are grouped to achieve a certain target at the end of them. Also, a test procedure combines only a few test cases at any point of time.
A test suite, on the other hand, is the list of all the test cases that have to be executed as part of a test cycle, a regression phase, etc. There is no logical grouping based on functionality. The order in which the constituent test cases get executed may or may not be important.
Example of a test suite: Suppose an application's current version is 2.0. The previous version, 1.0, might have had 1000 test cases to test it entirely. For version 2.0 there are 500 test cases just to test the new functionality added in the new version. So the current test suite would be 1000 + 500 test cases, covering both regression and the new functionality. The suite is a combination too, but we are not trying to achieve a target function.
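A rough sketch of the distinction in code (the test functions are placeholders): a test procedure is an ordered chain of cases forming one end-to-end flow, while a test suite is simply the collection scheduled for a cycle.

```python
def test_login(): ...
def test_compose_email(): ...
def test_attach_files(): ...
def test_send_email(): ...
def test_profile_settings(): ...

# Test procedure: order matters, the cases build on one another.
send_mail_procedure = [test_login, test_compose_email, test_attach_files, test_send_email]

# Test suite: just the list of everything to run this cycle; order is incidental.
regression_suite = {test_login, test_compose_email, test_attach_files,
                    test_send_email, test_profile_settings}

for case in send_mail_procedure:   # a procedure is executed in its fixed order
    case()
```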