RTM (Requirements Traceability Matrix): a mapping between requirements and test cases, used to check whether every requirement is covered by at least one test case.
TRM (Test Responsibility Matrix): a mapping that records which testing technique has to be applied to each corresponding use case.
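As a rough sketch, an RTM can be modeled as a simple mapping from requirement IDs to the test cases that cover them; the IDs below are hypothetical examples, not from any real project.

```python
# Minimal sketch of a Requirements Traceability Matrix (RTM).
# Requirement and test case IDs are hypothetical.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],  # covered by two test cases
    "REQ-002": ["TC-201"],            # covered by one test case
    "REQ-003": [],                    # not yet covered by any test case
}

# Coverage check: flag requirements with no mapped test cases.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Uncovered requirements:", uncovered)
```

Running this reports `REQ-003` as uncovered, which is exactly the gap an RTM exists to expose.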
Thursday, September 20, 2007
Saturday, September 8, 2007
Difference between use case and test case
USE CASE: a description of a functional or system requirement, showing how a user uses the system being designed to perform a task. It provides a powerful communication tool between customer, developer and tester.
TEST CASE: test cases are written on the basis of use case documents; each describes an input, action or event and the expected response, to determine whether a feature of the application is working correctly.
Saturday, September 1, 2007
Relationship between SDLC and STLC
Stage 1 (Contract): When the contract is signed and the organization gets a project from the customer, the UAT (User Acceptance Test) plan is prepared by referring to the SOW (Statement of Work) or the requirements, and it undergoes reviews (Verification).
Stage 2 (Requirement Gathering): Once requirement gathering and study are over, the SRS (Software Requirement Specification) is prepared and reviewed (Verification). Once the SRS is finalised, the STP (Software Test Plan) is prepared, and it also undergoes reviews (Verification).
Stage 3 (High Level Design): Taking the SRS as input, the HLDD (High Level Design Document) is prepared and undergoes reviews (Verification). Once the HLDD is ready, the ITP (Integration Test Plan) is prepared by referring to it, and it also undergoes reviews (Verification).
Stage 4 (Low Level Design): Taking the HLDD as input, the LLDD (Low Level Design Document) is prepared and undergoes reviews (Verification). Once the LLDD is ready, the UTP (Unit Test Plan) is prepared by referring to it, and it also undergoes reviews (Verification).
Stage 5 (Coding/Implementation): Taking the LLDD as input, coding is done, and the code also undergoes reviews (Verification).
This is all about the Verification part; from here onwards the Validation, or actual testing, starts.
Stage 6 (Unit Test): Once coding of the unit(s) is done, unit testing (Validation) is carried out by referring to the test cases present in the UTP.
Stage 7 (Integration Test): Once all the units are tested, integration testing (Validation) is carried out by referring to the test cases present in the ITP.
CONTINUED ------------->
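The stages above pair each development document with the test plan derived from it, in the classic V-model shape. A minimal sketch of that pairing, using the document names from the stages above (the ordering logic is the point, not the data):

```python
# V-model pairing: each development document (Verification side) is the
# input for one test plan (Validation side), per the stages above.
v_model = [
    ("SOW / contract", "UAT plan (acceptance testing)"),
    ("SRS",            "STP (software test plan)"),
    ("HLDD",           "ITP (integration test plan)"),
    ("LLDD",           "UTP (unit test plan)"),
]

# Validation executes in reverse order: the last plan prepared
# (the UTP) is the first one executed.
for doc, plan in reversed(v_model):
    print(f"{plan:35} <- derived from {doc}")
```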
Monday, August 20, 2007
Difference between defect, error, bug, failure and fault
Error : A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, and fault
Failure: The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, fault.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.
Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: bug, defect, error, exception.
Defect: A mismatch between the actual behavior of the product and its requirements.
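The fault/failure distinction can be seen in a toy example: the function below contains a fault (a wrong comparison operator), but it only surfaces as a failure for a particular input. The function and the requirement it violates are invented for illustration.

```python
# Fault: the (hypothetical) requirement says 18 and over counts as
# adult, so the comparison should be `>=`, but `>` was written.
def is_adult(age):
    return age > 18   # fault: the boundary value 18 is mishandled

# For most inputs the fault stays hidden...
assert is_adult(30) is True
assert is_adult(10) is False

# ...but the boundary input exposes it as a failure: the requirement
# expects True here, yet the program returns False.
print("is_adult(18) =", is_adult(18))
```

This is why a fault (the wrong operator) and a failure (the wrong observable result) are separate concepts: the fault exists all along, while the failure only appears when the right input reaches it.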
Monday, July 30, 2007
Difference between static and dynamic testing
Static Testing:
The Verification activities fall into the category of Static Testing. During static testing, you work from a checklist to check whether the work you are doing follows the organization's set standards. These standards can be for coding, integration and deployment. Reviews, inspections and walkthroughs are static testing methodologies.
Dynamic Testing:
Dynamic Testing involves working with the software, giving input values and checking if the output is as expected. These are the Validation activities. Unit Tests, Integration Tests, System Tests and Acceptance Tests are few of the Dynamic Testing methodologies.
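A minimal dynamic test does exactly what the description above says: execute the code with input values and compare the actual output against the expected output. The function under test here is a made-up example.

```python
# Hypothetical unit under test.
def add(a, b):
    return a + b

# Dynamic testing: run the code with inputs, compare actual vs expected.
test_cases = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

results = [add(*inputs) == expected for inputs, expected in test_cases]
print("All dynamic tests passed:", all(results))
```

Contrast this with static testing, where the same `add` function would only be read and reviewed, never executed.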
What is smoke testing?
Smoke Testing:
Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. It is often performed informally, without a formal test plan, which is why it is sometimes conflated with ad hoc testing.
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development cycle, this can be the only kind of testing that can be performed. Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of smoke testing.
A common practice at Microsoft, and some other software companies, is the daily build and smoke test process: every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested. Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but should expose any major problems. It should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly.
Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
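The daily build-and-smoke-test loop described above can be sketched as a driver that builds the product and then runs a few fast, critical-path checks, stopping at the first failure. The build step and the individual checks below are placeholders; a real suite would launch the actual product and exercise its core workflows.

```python
# Sketch of a daily build + smoke test driver. All checks are
# placeholder stubs standing in for real critical-path probes.

def build():
    # Placeholder for the real build step (compile, link, package).
    return True

def smoke_checks():
    # Each check is a fast pass/fail probe; the names are illustrative.
    checks = {
        "app starts": lambda: True,
        "home page loads": lambda: True,
        "login works": lambda: True,
    }
    for name, check in checks.items():
        if not check():
            return f"SMOKE FAILED: {name}"
    return "SMOKE PASSED"

if build():
    print(smoke_checks())
else:
    print("BUILD FAILED - smoke test skipped")
```

As the post notes, the check list starts tiny and grows with the system; the structure (build, then ordered fail-fast checks) stays the same.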
Monday, July 23, 2007
Difference between verification and validation
Verification:
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
Validation:
Validation ensures that the functionality, as defined in the requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verification activities are completed.