Sunday, November 18, 2007

How to explain the difference between V&V with an example

Have you ever prepared an "Egg Omelette"? I am assuming you have. So how will you prepare one? You will gather all the ingredients required for making an "Egg Omelette" - eggs, oil, salt, onion, and any others. Then what will you do? You will "verify" the ingredients to see whether they are of good enough "quality" and in the proper "quantity". Won't you? Here you are doing Verification.

Now let me assume that you have prepared the "Egg Omelette". Will you serve it directly to your guests? Or would you like to "taste" it to see whether it has been prepared well? A good cook will taste the dish before serving it to guests. This is comparable to Validation.

Friday, November 16, 2007

Difference between Smoke testing and Sanity testing

Sanity Testing:
Sanity testing is the initial check made on a build after it is received from the developers. When a major bug is filed against a build, the developers fix that particular bug alone and release a new build; testing the new build to check that the particular issue is fixed, and that the fix has not impacted any other functionality, is known as "Sanity Testing".

Smoke Testing:
Here we test whether the application is fit for further testing or not; we simply check the application's readiness to be tested.
We can check the following things:
i) The build is properly installed on the system (because sometimes the application crashes during installation).
ii) The installed application connects properly to the database and the network.
iii) All the modules are displayed and working fine.
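
To make this checklist concrete, here is a minimal smoke-test sketch in Python. The names are illustrative, not a real application: standard-library modules (sqlite3, json, os) stand in for the hypothetical application's package, database, and modules.

    import importlib.util
    import sqlite3
    import unittest

    class SmokeTest(unittest.TestCase):
        def test_build_installed(self):
            # (i) Build is properly installed: the package can be located and imported.
            self.assertIsNotNone(importlib.util.find_spec("sqlite3"))

        def test_database_connection(self):
            # (ii) The application can reach its database (an in-memory DB stands in here).
            conn = sqlite3.connect(":memory:")
            self.assertEqual(conn.execute("SELECT 1").fetchone()[0], 1)
            conn.close()

        def test_main_modules_load(self):
            # (iii) All main modules are present and importable.
            for mod in ("json", "os"):  # replace with the application's real modules
                self.assertIsNotNone(importlib.util.find_spec(mod))

    if __name__ == "__main__":
        unittest.main()

If any of these checks fail, the build is rejected as unfit for further testing.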

Thursday, November 1, 2007

Boundary value testing

is a technique to find whether the application accepts the expected range of values and rejects values that fall outside that range.
Ex. A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
BVA is done like this: min-1 = 3 fail; min = 4 pass; min+1 = 5 pass; max-1 = 9 pass; max = 10 pass; max+1 = 11 fail.
Likewise we check the corner values and conclude whether the application accepts the correct range of values.
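
A quick sketch of the same BVA exercise in Python; is_valid_user_id is a hypothetical validator written only for illustration of the user ID example above.

    import re

    def is_valid_user_id(user_id):
        # Accepts 4 to 10 alphabetic characters (a-z), per the requirement above.
        return re.fullmatch(r"[a-z]{4,10}", user_id) is not None

    # Boundary lengths: min-1, min, min+1 and max-1, max, max+1.
    cases = {3: False, 4: True, 5: True, 9: True, 10: True, 11: False}
    for length, expected in cases.items():
        assert is_valid_user_id("a" * length) == expected, length
    print("All boundary values behave as expected")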

Thursday, September 20, 2007

Difference between RTM and TRM

RTM (Requirement Traceability Matrix): a mapping between requirements and test cases, used to check whether every requirement is covered by test cases.


TRM (Test Responsibility Matrix): specifies which testing technique has to be applied to each corresponding use case.
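
For illustration, an RTM can be sketched as a simple mapping; the requirement and test-case IDs below are invented.

    rtm = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
        "REQ-003": [],  # requirement with no test case yet
    }

    # The whole point of the RTM: flag requirements with no test coverage.
    uncovered = [req for req, tcs in rtm.items() if not tcs]
    print("Requirements without test coverage:", uncovered)  # ['REQ-003']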

Saturday, September 8, 2007

Difference between use case and test case

USE CASE: a functional and system requirement describing how a user uses the system being designed to perform a task. It provides a powerful means of communication between customer, developer and tester.


TEST CASE:
Test cases are written on the basis of use case documents. A test case describes an input action or event and the expected response, to determine whether a feature of an application is working correctly.
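
A small sketch of how test cases follow from a use case; a hypothetical "user logs in" use case is assumed, and login() is a made-up stand-in for the real system.

    def login(username, password):
        # Stand-in for the real system under test.
        return username == "alice" and password == "secret"

    def test_valid_login():
        # Input action: a registered user submits correct credentials.
        # Expected response: access is granted.
        assert login("alice", "secret") is True

    def test_invalid_password():
        # Input action: a registered user submits a wrong password.
        # Expected response: access is denied.
        assert login("alice", "wrong") is False

    test_valid_login()
    test_invalid_password()
    print("Both test cases pass")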

Saturday, September 1, 2007

Relationship between SDLC and STLC

Stage 1: When the contract is signed or the organization gets a project from the customer, the UAT (User Acceptance Test) Plan is prepared by referring to the SOW (Statement of Work) or the requirements, and the plan undergoes reviews (Verification).

Stage 2 (Requirement Gathering): Once requirement gathering or study is over, the SRS (Software Requirement Specification) is prepared and reviewed (Verification). Once the SRS is finalised, the STP (Software Test Plan) is prepared and the same undergoes reviews (Verification).

Stage 3 (High Level Design): Taking the SRS as input, the HLDD (High Level Design Document) is prepared and undergoes reviews (Verification). Once the HLDD is ready, the ITP (Integration Test Plan) is prepared by referring to it, and the same undergoes reviews (Verification).

Stage 4 (Low Level Design): Taking the HLDD as input, the LLDD (Low Level Design Document) is prepared and undergoes reviews (Verification). Once the LLDD is ready, the UTP (Unit Test Plan) is prepared by referring to it, and the same undergoes reviews (Verification).

Stage 5 (Coding/Implementation): Taking the LLDD as input, coding is done, and the code also undergoes reviews (Verification).
This is all about the Verification part; from here onwards the Validation, or actual testing, starts.

Stage 6 (Unit Test): Once the coding of the unit(s) is done, unit testing (Validation) is carried out by referring to the test cases present in the UTP.

Stage 7 (Integration Test): Once all the units are tested, integration testing (Validation) is carried out by referring to the test cases present in the ITP.
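
A toy sketch of Stages 6 and 7, using two invented units: the unit tests check each unit in isolation, and the integration test checks them working together.

    def parse_amount(text):      # unit 1: convert text to a number
        return float(text)

    def apply_discount(amount):  # unit 2: apply a 10% discount
        return amount * 9 / 10

    # Stage 6: unit tests validate each unit in isolation (as per the UTP).
    assert parse_amount("100") == 100.0
    assert apply_discount(100.0) == 90.0

    # Stage 7: an integration test validates the units together (as per the ITP).
    assert apply_discount(parse_amount("100")) == 90.0
    print("Unit and integration checks pass")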
CONTINUED ------------->

Monday, August 20, 2007

Difference between defect,error,bug,failure and fault

Error : A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, and fault

Failure: The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, fault.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, fault.

Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: bug, defect, error, exception.

Defect: A mismatch between the actual result and the expected result; a deviation from the requirements.
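
A toy Python snippet may help tie the terms together; the average() function below is deliberately wrong for illustration.

    def average(values):
        # FAULT: an incorrect step in the program - should divide by len(values).
        return sum(values) / (len(values) - 1)

    result = average([2, 4, 6])   # ERROR: computed value 6.0 vs the true value 4.0
    expected = 4.0
    if result != expected:
        # FAILURE: the observable inability to produce the required result.
        print(f"failure observed: got {result}, expected {expected}")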

Monday, July 30, 2007

Difference between Static and Dynamic testing

Static Testing:
The Verification activities fall into the category of Static Testing. During static testing, you have a checklist to check whether the work you are doing follows the set standards of the organization. These standards can be for coding, integration and deployment. Reviews, inspections and walkthroughs are static testing methodologies.


Dynamic Testing:
Dynamic Testing involves working with the software, giving input values and checking whether the output is as expected. These are the Validation activities. Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies.
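
A minimal dynamic-testing sketch; celsius_to_fahrenheit is a toy function chosen only to show the give-input/check-output cycle.

    def celsius_to_fahrenheit(c):
        return c * 9 / 5 + 32

    # Feeding inputs and comparing outputs to expectations IS the dynamic test.
    for given, expected in [(0, 32.0), (100, 212.0), (-40, -40.0)]:
        actual = celsius_to_fahrenheit(given)
        assert actual == expected, f"{given}C -> {actual}, expected {expected}"
    print("All dynamic checks pass")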

What is smoke testing?

Smoke Testing:
Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. It is often performed ad hoc, i.e. without a formal test plan.
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development cycle, this can be the only kind of testing that can be performed.

Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of smoke testing. A common practice at Microsoft, and some other software companies, is the daily build and smoke test process: every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested.

Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but should expose any major problems. It should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly.

Without smoke testing, the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against any errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.
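
A rough sketch of such a daily build-and-smoke-test driver in Python; the placeholder commands here just print "ok" and are meant to be replaced with a real build command and smoke suite.

    import subprocess
    import sys

    def run(cmd):
        print("running:", " ".join(cmd))
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        # Step 1: the daily build (placeholder for your real build command).
        if run([sys.executable, "-c", "print('build ok')"]) != 0:
            sys.exit("daily build failed")
        # Step 2: smoke-test the fresh build (placeholder for your smoke suite).
        if run([sys.executable, "-c", "print('smoke ok')"]) != 0:
            sys.exit("smoke test failed; build is not fit for further testing")
        print("daily build passed the smoke test")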

Monday, July 23, 2007

Difference between verification and validation

Verification:

Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.

Validation:

Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verification is completed.

Friday, July 20, 2007

Define the Severity and Priority

Severity: Severity determines the defect's effect on the application. Severity is given by the testers.
Priority: Priority determines the urgency of repairing the defect. Priority is given by the test lead or project manager.

1. High Severity & Low Priority: For example, consider an application which generates banking-related reports weekly, monthly, quarterly and yearly by doing some calculations. If there is a fault in calculating the yearly report, it is a high severity fault but low priority, because it can be fixed in the next release as a change request.


2. High Severity & High Priority: In the above example, if there is a fault in calculating the weekly report, it is a high severity and high priority fault, because it will block the functionality of the application within a week. It should be fixed urgently.


3. Low Severity & High Priority: Suppose there is a spelling mistake or content issue on the homepage of a website which gets lakhs of hits daily. Though the fault does not affect the website's functionality, considering the status and popularity of the website in the competitive market it is a high priority fault.


4. Low Severity & Low Priority: If there is a spelling mistake on pages which get very few hits throughout the month, the fault can be considered low severity and low priority.

Priority is used to organize the work; the field only takes on meaning once the bug has an owner.
P1 - Fix in the next build
P2 - Fix as soon as possible
P3 - Fix before the next release
P4 - Fix if time allows
P5 - Unlikely to be fixed
The default priority for new defects is P3.
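
For illustration, severity and priority might be captured on a defect record like this; the field names and values are invented, not taken from any particular bug tracker.

    from dataclasses import dataclass

    @dataclass
    class Defect:
        summary: str
        severity: str          # set by the tester: the effect on the application
        priority: str = "P3"   # set by the test lead/PM; the default for new defects

    # Example 1 above: high severity, but the fix can wait for the next release.
    bug = Defect(summary="Yearly report calculation is wrong", severity="High")
    bug.priority = "P4"
    print(bug)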

Thursday, July 19, 2007

Test cases for Mobile Phone

Test Cases for Mobile Phone:

1) Check whether the battery is inserted into the mobile properly.

2) Check switching the mobile on and off.

3) Insert the SIM into the phone and check.

4) Add one user with a name and phone number in the address book.

5) Check an incoming call.

6) Check an outgoing call.

7) Send and receive messages on the mobile.

8) Check that all the number/character keys on the phone work fine by pressing them.

9) Remove the user from the phone book and check that the name and phone number are removed properly.

10) Check whether the network is working fine.

11) If it is GPRS enabled, check the connectivity.

Wednesday, July 18, 2007

STLC (Software Testing Life Cycle)

Order of STLC:
  • Test Strategy
  • Test Plan
  • Test Scenario
  • Test Case

Test Strategy:

Test Strategy is a document, developed by the Project Manager, which specifies which testing techniques to follow and which modules to test.

Test Plan:

A test plan is a document, developed by the Test Lead, which specifies "what to test", "how to test", "when to test", and "who will test".

Test Scenario:

A test scenario is a name given to a set of related test cases. Test scenarios are dealt with by the Test Engineer.

Test Cases:

A test case is also a document; it specifies a testable condition to validate a functionality. Test cases are dealt with by the Test Engineer.
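
For illustration, the fields such a test-case document typically carries might look like this; the content is invented.

    test_case = {
        "id": "TC-001",
        "scenario": "User login",                     # the test scenario it belongs to
        "precondition": "A registered user exists",
        "steps": ["Open the login page", "Enter valid credentials", "Click Login"],
        "expected_result": "The user lands on the home page",
    }
    for field, value in test_case.items():
        print(f"{field}: {value}")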

Monday, July 16, 2007

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

* SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.

* CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

* Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

* Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

* Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

* Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

* Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

* Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.

Write Test Cases on a Fan

Test Cases on a Fan:
1. It should have a hook for hanging from the ceiling.
2. It should have a minimum of three blades.
3. It should start moving once electricity passes into it.
4. The speed of the fan should be controlled by the regulator.
5. It should stop once the switch is turned off.
6. The fan should run with minimum noise.
7. The blades should be at a proper distance from the ceiling.
8. The fan, while in motion, should not vibrate.
9. The color of the fan should be dark.

Sunday, July 15, 2007

Cyclomatic Complexity

Cyclomatic complexity is a software metric; it measures the logical complexity of an application by counting the number of linearly independent paths through its code. For a control-flow graph with E edges, N nodes and P connected components, V(G) = E - N + 2P.
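
For structured code, this works out to the number of decision points plus one, as the toy function below illustrates.

    def classify(n):
        if n < 0:            # decision point 1
            return "negative"
        elif n == 0:         # decision point 2
            return "zero"
        elif n % 2 == 0:     # decision point 3
            return "even"
        else:
            return "odd"

    # 3 decision points + 1 = cyclomatic complexity of 4, so at least four
    # test cases are needed to cover every independent path:
    print(classify(-1), classify(0), classify(2), classify(3))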