Manual Testing FAQs
September 24, 2007 by Sridhar Lukka
What is ‘Software Quality Assurance’? 

Software QA involves the entire software development process: monitoring and improving the process, making sure that agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to ‘prevention’.

What is ‘Software Testing’?     
           
Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., ‘if the user is in interface A of the application while using hardware B, and does C, then D should happen’). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should. It is oriented to ‘detection’.

Why does software have bugs?

·         Miscommunication or no communication
·         Software complexity
·         Programming errors
·         Changing requirements
·         Time pressures
·         Egos
·         Poorly documented code

What is Verification?

“Verification” checks whether we are building the system right, i.e., whether the work products conform to what was specified at the start of each phase. Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.


What is Validation? 

“Validation” checks whether we are building the right system, i.e., whether the product meets the user’s actual needs and requirements. Validation typically involves actual testing and takes place after verifications are completed.

 What is a ‘walkthrough’? 

A ‘walkthrough’ is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.  

What’s an ‘inspection’? 

An inspection is more formalized than a ‘walkthrough’, typically with 3-8 people including a moderator, a reader, and a recorder to take notes; the author of whatever is being reviewed is usually also present. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what’s missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the ‘eldest brother’ in the parable in ‘Why is it often hard for management to get serious about quality assurance?’. Their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.

What kinds of testing should be considered?

·         Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
·         White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, and conditions.
·         Unit testing – the most ‘micro’ scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. (A minimal example appears after this list.)
·         Incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
·         Integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
·         Functional testing – black box testing geared to the functional requirements of an application; this type of testing should be done by testers. This doesn’t mean that the programmers shouldn’t check that their code works before releasing it (which of course applies to any stage of testing).
·         System testing – black box testing that is based on overall requirements specifications; covers all combined parts of a system.
·         End-to-end testing – similar to system testing; the ‘macro’ end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
·         Sanity testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
·         Regression testing – re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
·         Acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
·         Load testing – testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system’s response time degrades or fails.
·         Stress testing – term often used interchangeably with ‘load’ and ‘performance’ testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
·         Performance testing – term often used interchangeably with ‘stress’ and ‘load’ testing. Ideally ‘performance’ testing (and any other ‘type’ of testing) is defined in requirements documentation or QA or Test Plans.
·         Usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
·         Install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.
·         Recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
·         Security testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
·         Compatibility testing – testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
·         Exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
·         Ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
·         User acceptance testing – determining if software is satisfactory to an end-user or customer.
·         Comparison testing – comparing software weaknesses and strengths to competing products.
·         Alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
·         Beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
·         Mutation testing – a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
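
To make the ‘unit testing’ entry above concrete, here is a minimal, hypothetical sketch using Python’s standard unittest framework. The apply_discount function is invented for illustration; the tests show the typical-value, boundary-value, and error-path checks a programmer might write for one small unit of code.

import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; percent outside 0-100 is rejected."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_boundary_percentages(self):
        # Boundary values: the edges of the valid 0-100 range.
        self.assertEqual(apply_discount(80.0, 0), 80.0)
        self.assertEqual(apply_discount(80.0, 100), 0.0)

    def test_invalid_percentage_rejected(self):
        # Error path: a value just outside the valid range.
        with self.assertRaises(ValueError):
            apply_discount(80.0, 101)

if __name__ == '__main__':
    unittest.main()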

What is SEI? CMM? ISO? IEEE? ANSI? Will it help?

·         SEI = ‘Software Engineering Institute’ at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
·         CMM = ‘Capability Maturity Model’, developed by the SEI. It’s a model of 5 levels of organizational ‘maturity’ that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors.
Level 1 – ‘Initial’: processes are ad hoc and success depends on individual effort.
Level 2 – ‘Repeatable’: basic project tracking, requirements management, planning, and configuration management processes are in place; earlier successes can be repeated.
Level 3 – ‘Defined’: standard software development and maintenance processes are documented and integrated throughout the organization.
Level 4 – ‘Managed’: metrics are used to track and quantitatively control productivity, processes, and products.
Level 5 – ‘Optimizing’: the focus is on continuous process improvement based on quantitative feedback.
(Perspective on CMM ratings: During 1992-1996, 533 organizations were assessed. Of those, 62% were rated at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5. The median size of organizations was 100 software engineering/maintenance personnel; 31% of organizations were U.S. federal contractors. For those rated at Level 1, the most problematical key process area was in Software Quality Assurance.)
·         ISO = ‘International Organization for Standardization’ – The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products – it indicates only that documented processes are followed. (Publication of revised ISO standards was expected in late 2000; see http://www.iso.ch/ for the latest info.)
·         IEEE = ‘Institute of Electrical and Electronics Engineers’ – among other things, creates standards such as ‘IEEE Standard for Software Test Documentation’ (IEEE/ANSI Standard 829), ‘IEEE Standard of Software Unit Testing’ (IEEE/ANSI Standard 1008), ‘IEEE Standard for Software Quality Assurance Plans’ (IEEE/ANSI Standard 730), and others.
·         ANSI = ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

What is the ‘software life cycle’? 
The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.



How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.  

How can World Wide Web sites be tested?

Web sites are essentially client/server applications – with web servers and ‘browser’ clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
·         What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)? (A small load-test sketch appears after this answer.)
·         Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
·         What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
·         Will down time for server and content maintenance/upgrades be allowed? How much?
·         What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
·         How reliable are the site’s Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
·         What processes will be required to manage updates to the web site’s content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
·         Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
·         Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
·         How will internal and external links be validated and updated? How often?
·         Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet ‘traffic congestion’ problems to be accounted for in testing?
·         How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
·         How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup ‘comp.security.announce’ and links concerning web site security in the ‘Other Resources’ section.
Some usability guidelines to consider – these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the ‘Other Resources’ section):
·         Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
·         The page layouts and design elements should be consistent throughout a site, so that it’s clear to the user that they’re still within a site.
·         Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.
·         All pages should have links external to the page; there should be no dead-end pages.
·         The page owner, revision date, and a link to a contact person or organization should be included on each page.
Many new web site test tools are appearing, and more than 180 of them are listed in the ‘Web Test Tools’ section.
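
As a concrete illustration of the load/performance questions above, here is a small, hypothetical load-test sketch in Python using only the standard library. The URL is a placeholder for the page under test, and the user counts are arbitrary; real load testing would normally use a dedicated tool.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://www.example.com/"  # placeholder: substitute the page under test

def timed_request(_):
    # Fetch the page once and return the elapsed time in seconds.
    start = time.perf_counter()
    with urlopen(URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = sorted(pool.map(timed_request, range(total)))
    median = latencies[len(latencies) // 2]
    print("%2d users: median %.3fs, worst %.3fs"
          % (concurrent_users, median, latencies[-1]))

# Step the load up to see where response times begin to degrade.
for users in (1, 5, 10, 20):
    run_load(users, 5)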

 How is testing affected by object-oriented designs? 

Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application’s objects. If the application was well-designed this can simplify test design.

What is Extreme Programming and what’s it got to do with testing? 

Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book ‘Extreme Programming Explained’ (see the Softwareqatest.com Books page). Testing (‘extreme testing’) is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first – before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected. For more info see the XP-related listings in the Softwareqatest.com ‘Other Resources’ section.

Test Life Cycle
·         Identify Test Candidates
·         Test Plan
·         Design Test Cases
·         Execute Tests
·         Evaluate Results
·         Document Test Results
·         Causal Analysis / Preparation of Validation Reports
·         Regression Testing / Follow up on reported bugs.
  
Glass Box Testing
Test case selection that is based on an analysis of the internal structure of the component; testing by looking only at the code. Sometimes also called “Code Based Testing”. Obviously you need to be a programmer and you need to have the source code to do this.
Test Case
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Operational Testing
Testing conducted to evaluate a system or component in its operational environment.
Validation
Determination of the correctness of the products of software development with respect to the user needs and requirements. 
Verification
The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.
Control Flow
An abstract representation of all possible sequences of events in a program’s execution. 
CAST
Acronym for computer-aided software testing.
Metrics
Ways to measure: e.g., time, cost, customer satisfaction, quality.


What makes a good test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgement skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer? 

The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews. 

What makes a good QA or Test manager? 

A good QA, test, or QA/Test (combined) manager should:
• be familiar with the software development process 
• be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
• be able to promote teamwork to increase productivity 
• be able to promote cooperation between software, test, and QA engineers 
• have the diplomatic skills needed to promote improvements in QA processes 
• have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to 
• have people judgement skills for hiring and keeping skilled personnel 
• be able to communicate with technical and non-technical people, engineers, managers, and customers. 
• be able to run meetings and keep them focused 

What's the role of documentation in QA? 

Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible. 

What's the big deal about 'requirements'? 

One of the most reliable methods of ensuring problems, or failure, in a complex software project is to have poorly documented requirements specifications. Requirements are the details describing an application's externally-perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and testable. A non-testable requirement would be, for example, 'user-friendly' (too subjective). A testable requirement would be something like 'the user must enter their previously-assigned password to access the application'. Determining and organizing requirements details in a useful and efficient way can be a difficult effort; different methods are available depending on the particular project. Many books are available that describe various approaches to this task. (See the Bookstore section's 'Software Requirements Engineering' category for books on Software Requirements.) 
Care should be taken to involve ALL of a project's significant 'customers' in the requirements process. 'Customers' could be in-house personnel or out, and could include end-users, customer acceptance testers, customer contract officers, customer management, future software maintenance engineers, salespeople, etc. Anyone who could later derail the project if their expectations aren't met should be included if possible. 
Organizations vary considerably in their handling of requirements specifications. Ideally, the requirements are spelled out in a document with statements such as 'The product shall…'. 'Design' specifications should not be confused with 'requirements'; design specifications should be traceable back to the requirements. 
In some organizations requirements may end up in high level project plans, functional specification documents, in design documents, or in other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by testers in order to properly plan and execute tests. Without such documentation, there will be no clear-cut way to determine if a software application is performing correctly. 
'Agile' methods such as XP rely on close interaction and cooperation between programmers and customers/end-users to iteratively develop requirements. The programmer uses 'test first' development, creating automated unit test code before the application code itself; the tests essentially embody the requirements. 
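
A minimal, hypothetical sketch of this 'test first' style in Python: the tests below are written before any login code exists, and they encode the testable requirement quoted earlier ('the user must enter their previously-assigned password to access the application'). The authenticate stub is an invented stand-in for the production code still to be written.

import unittest

def authenticate(username, password):
    # Hypothetical stand-in for the not-yet-written login code. The suite
    # below starts out failing ('red'); the programmer then implements this
    # function until all tests pass ('green').
    raise NotImplementedError("written after the tests")

class PasswordRequirementTest(unittest.TestCase):
    def test_assigned_password_grants_access(self):
        self.assertTrue(authenticate(username="alice", password="s3cret!"))

    def test_wrong_password_denies_access(self):
        self.assertFalse(authenticate(username="alice", password="wrong"))

if __name__ == '__main__':
    unittest.main()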

What steps are needed to develop and run software tests? 

The following are some of the steps to consider: 
• Obtain requirements, functional design, and internal design specifications and other necessary documents 
• Obtain budget and schedule requirements 
• Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.) 
• Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests 
• Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc. 
• Determine test environment requirements (hardware, software, communications, etc.) 
• Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.) 
• Determine test input data requirements 
• Identify tasks, those responsible for tasks, and labor requirements 
• Set schedule estimates, timelines, milestones 
• Determine input equivalence classes, boundary value analyses, error classes (a short worked example follows this list) 
• Prepare test plan document and have needed reviews/approvals 
• Write test cases 
• Have needed reviews/inspections/approvals of test cases 
• Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data 
• Obtain and install software releases 
• Perform tests 
• Evaluate and report results 
• Track problems/bugs and fixes 
• Retest as needed 
• Maintain and update test plans, test cases, test environment, and testware through life cycle 
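
As a brief illustration of the equivalence class / boundary value step above, here is a hedged Python sketch. The age_category function and its 0-120 valid range are invented for the example; the point is how test values are chosen: one representative per equivalence class, plus values at and adjacent to each boundary.

def age_category(age):
    """Classify an age field whose valid range is assumed to be 0-120."""
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"

# Equivalence classes: one representative per class is usually sufficient.
valid_samples = [10, 40]        # the two valid classes (minor, adult)
error_samples = [-1, 121]       # the two invalid (error) classes

# Boundary values: exactly at, and adjacent to, each edge of the ranges.
boundary_samples = [0, 1, 17, 18, 119, 120]

for age in valid_samples + boundary_samples:
    print(age, "->", age_category(age))

for age in error_samples:
    try:
        age_category(age)
    except ValueError:
        print(age, "-> correctly rejected")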

What's a 'test plan'? 
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project: 
• Title 
• Identification of software including version/release numbers 
• Revision history of document including authors, dates, approvals 
• Table of Contents 
• Purpose of document, intended audience 
• Objective of testing effort 
• Software product overview 
• Relevant related document list, such as requirements, design documents, other test plans, etc. 
• Relevant standards or legal requirements 
• Traceability requirements 
• Relevant naming conventions and identifier conventions 
• Overall software project organization and personnel/contact-info/responsibilities 
• Test organization and personnel/contact-info/responsibilities 
• Assumptions and dependencies 
• Project risk analysis 
• Testing priorities and focus 
• Scope and limitations of testing 
• Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable 
• Outline of data input equivalence classes, boundary value analysis, error classes 
• Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems 
• Test environment validity analysis - differences between the test and production systems and their impact on test validity. 
• Test environment setup and configuration issues 
• Software migration processes 
• Software CM processes 
• Test data setup requirements 
• Database setup requirements 
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs 
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs 
• Test automation - justification and overview 
• Test tools to be used, including versions, patches, etc. 
• Test script/test code maintenance processes and version control 
• Problem tracking and resolution - tools and processes 
• Project test metrics to be used 
• Reporting requirements and testing deliverables 
• Software entrance and exit criteria 
• Initial sanity testing period and criteria 
• Test suspension and restart criteria 
• Personnel allocation 
• Personnel pre-training needs 
• Test site/location 
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues 
• Relevant proprietary, classified, security, and licensing issues. 
• Open issues 
• Appendix - glossary, acronyms, etc. 
(See the Bookstore section's 'Software Testing' and 'Software QA' categories for useful books with more information.) 

What's a 'test case'? 

• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. 
• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible. 
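
To make the particulars listed above concrete, here is one hypothetical way a test case might be captured as structured data (a Python dictionary). All field values are invented; the shape simply mirrors the items named in the definition.

test_case = {
    "id": "TC-042",  # test case identifier (invented numbering scheme)
    "name": "Login rejects an expired password",
    "objective": "Verify that an expired password cannot be used to log in",
    "conditions_setup": "Account 'demo_user' exists; its password expired yesterday",
    "input_data": {"username": "demo_user", "password": "old_pass"},
    "steps": [
        "Open the login screen",
        "Enter the username and the expired password",
        "Click 'Sign in'",
    ],
    "expected_result": "Login is refused and a 'password expired' message is shown",
}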

What should be done after a bug is found? 

The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process: 
• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary. 
• Bug identifier (number, ID, etc.) 
• Current bug status (e.g., 'Released for Retest', 'New', etc.) 
• The application name or identifier and version 
• The function, module, feature, object, screen, etc. where the bug occurred 
• Environment specifics, system, platform, relevant hardware specifics 
• Test case name/number/identifier 
• One-line bug description 
• Full bug description 
• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool 
• Names and/or descriptions of file/data/messages/etc. used in test 
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem 
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common) 
• Was the bug reproducible? 
• Tester name 
• Test date 
• Bug reporting date 
• Name of developer/group/organization the problem is assigned to 
• Description of problem cause 
• Description of fix 
• Code section/file/module/class/method that was fixed 
• Date of fix 
• Application version that contains the fix 
• Tester responsible for retest 
• Retest date 
• Retest results 
• Regression testing requirements 
• Tester responsible for regression tests 
• Regression testing results 
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers. 
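
One hedged sketch of how the tracking items above might be represented in a problem-tracking tool's data model, using a Python dataclass. The field names and sample values are illustrative only, not any particular tool's schema; a real tracker would carry the full list of fields above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    bug_id: str                 # bug identifier
    status: str                 # e.g. 'New', 'Released for Retest'
    application: str            # application name or identifier
    version: str
    summary: str                # one-line bug description
    description: str            # full bug description
    steps_to_reproduce: List[str] = field(default_factory=list)
    severity: int = 3           # 1 (critical) .. 5 (low)
    reproducible: bool = True
    tester: str = ""
    assigned_to: str = ""

bug = BugReport(
    bug_id="BUG-1073", status="New", application="OrderEntry", version="2.4.1",
    summary="Crash when saving an order with an empty customer field",
    description="Saving an order without a customer raises an unhandled error.",
    steps_to_reproduce=["Create a new order", "Leave customer blank", "Click Save"],
    severity=1, reproducible=True, tester="tester1",
)
print(bug.bug_id, bug.status, bug.summary)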

What is 'configuration management'? 

Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes. (See the 'Tools' section for web resources with listings of configuration management tools. Also see the Bookstore section's 'Configuration Management' category for useful books with more information.) 

What if the software is so buggy it can't really be tested at all? 

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.) managers should be notified, and provided with some documentation as evidence of the problem. 

How can it be known when to stop testing? 

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are: 
• Deadlines (release deadlines, testing deadlines, etc.) 
• Test cases completed with certain percentage passed 
• Test budget depleted 
• Coverage of code/functionality/requirements reaches a specified point 
• Bug rate falls below a certain level 
• Beta or alpha testing period ends 

What if there isn't enough time for thorough testing? 

Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include: 
• Which functionality is most important to the project's intended purpose? 
• Which functionality is most visible to the user? 
• Which functionality has the largest safety impact? 
• Which functionality has the largest financial impact on users? 
• Which aspects of the application are most important to the customer? 
• Which aspects of the application can be tested early in the development cycle? 
• Which parts of the code are most complex, and thus most subject to errors? 
• Which parts of the application were developed in rush or panic mode? 
• Which aspects of similar/related previous projects caused problems? 
• Which aspects of similar/related previous projects had large maintenance expenses? 
• Which parts of the requirements and design are unclear or poorly thought out? 
• What do the developers think are the highest-risk aspects of the application? 
• What kinds of problems would cause the worst publicity? 
• What kinds of problems would cause the most customer service complaints? 
• What kinds of tests could easily cover multiple functionalities? 
• Which tests will have the best high-risk-coverage to time-required ratio?
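
A simple way to act on these questions is to score each area and sort by risk; here is a hedged sketch. The features and the 1-5 likelihood/impact ratings are invented; in practice the ratings would come from the considerations listed above.

# Rank test areas by risk (likelihood x impact) so limited testing time
# is spent on the highest-risk areas first.
areas = [
    # (feature, likelihood of failure 1-5, impact of failure 1-5)
    ("checkout payment", 4, 5),
    ("new shipping calculator", 5, 4),
    ("order history page", 2, 2),
    ("help screens", 1, 1),
]

for feature, likelihood, impact in sorted(areas, key=lambda a: a[1] * a[2], reverse=True):
    print("risk %2d: %s" % (likelihood * impact, feature))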

What if the project isn't big enough to justify extensive testing? 

Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis. 

What can be done if requirements are changing continuously? 

A common problem and a major headache. 
• Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible. 
• It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch. 
• If the code is well-commented and well-documented this makes changes easier for the developers. 
• Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes. 
• The project's initial schedule should allow for some extra time commensurate with the possibility of changes. 
• Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version. 
• Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application. 
• Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job. 
• Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes. 
• Try to design some flexibility into automated test scripts. 
• Focus initial automated testing on application aspects that are most likely to remain unchanged. 
• Devote appropriate effort to risk analysis of changes to minimize regression testing needs. 
• Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans) 
• Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails). 

What if the application has functionality that wasn't in the requirements? 

It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk. 

How can Software QA processes be implemented without stifling productivity? 

By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers. 

What if an organization is growing so fast that fixed QA processes are impossible? 

This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than: 
• Hire good people 
• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer 
• Everyone in the organization should be clear on what 'quality' means to the customer 

How does a client/server environment affect testing? 

Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web resources with listings that include these kinds of test tools.) 

How can World Wide Web sites be tested? 

Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include: 
• What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)? 
• Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)? 
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)? 
• Will downtime for server and content maintenance/upgrades be allowed? How much? 
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested? 
• How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing? 
• What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.? 
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers? 
• Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site? 
• How will internal and external links be validated and updated? How often? (See the link-checking sketch after this list.) 
• Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing? 
• How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing? 
• How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested? 
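As a sketch of how the link-validation question above might be automated, here is a minimal link checker using only the Python standard library. The start URL is a placeholder; a real checker would also need rate limiting, robots.txt handling, and retries:

```python
# Minimal link-checker sketch using only the standard library.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    parser = LinkExtractor()
    parser.feed(html)
    for href in parser.links:
        target = urljoin(page_url, href)  # resolve relative links
        try:
            status = urllib.request.urlopen(target, timeout=10).status
            print(f"{status} {target}")
        except Exception as exc:
            print(f"BROKEN {target} ({exc})")

check_links("http://www.example.com/")  # placeholder URL
```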
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section. 
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section): 
• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page. 
• The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site. 
• Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type. 
• All pages should have links external to the page; there should be no dead-end pages. 
• The page owner, revision date, and a link to a contact person or organization should be included on each page. 
Many new web site test tools have appeared in recent years; more than 280 of them are listed in the 'Web Test Tools' section.

Read more: 
http://www.ittestpapers.com/manual-testing-interview-questions.html


When would you perform regression testing?

This depends on the requirements and the lifespan of a product, but there are many other factors that influence whether regression testing needs to be done.

An example would be a product that is at a certain stage in its life cycle, where there is a requirement for a new version to replace the old version.

In theory the ideal scenario would be that the new version of the software fully supports the legacy functions of the previous version and is able to add new functionality to the product without compromising the functional integrity of the overall product.

With that in mind, this is where regression testing would be done, to verify that the new version of the product does not introduce new defects into the system.


What’s the difference between functional testing, system test, and UAT?

Functional Testing:- testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. This is black box testing.
System Testing:- System testing is black box testing, performed by the Test Team, and at the start of the system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a "simulated real life" test environment and test all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. 
UAT:- Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria. 
When would you perform regression testing?

Regression testing is verifying that previously passed tests are still OK after any change to the software or the environment, usually to verify that a change in one area doesn't affect other or unrelated areas.

What would you base your test cases on?

A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as a... 
· Test case identifier; 
· Test case name; 
· Objective; 
· Test conditions/setup; 
· Input data requirements/steps, and 
· Expected results. 
Test cases will be prepared by the tester based on the BRD (Business Requirements Document) and FS (Functional Specification).
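
To make the structure concrete, the fields listed above can be captured in a simple data structure. A hedged Python sketch (the field names follow the list above; everything else is illustrative):

```python
# Sketch: the test case fields listed above as a simple structure.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    objective="Verify a registered user can log in",
    setup="User 'admin' exists and is active",
    steps=["Open login page", "Enter valid credentials", "Click Login"],
    expected_result="User lands on the home page",
)
print(tc.identifier, "-", tc.name)
```
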
How do you make sure the results are as expected?

In functional or system testing we test with real-time data and real-time scenarios, using client-approved test cases, so that we know what the correct result should be.

What’s more important: Positive or Negative testing?

Both are important, but most test cases are written for positive conditions; for some applications, negative cases are just as important.

When you realize the load you have cannot be done in the time given, how would you handle?

Use risk analysis to determine where testing should be focused. Based on a number of considerations, we can decide which testing to complete in the time available (a prioritization sketch follows this list). Some of these considerations are: 
Which functionality is most important to the project's intended purpose? 
Which functionality is most visible to the user? 
Which functionality has the largest safety impact? 
Which functionality has the largest financial impact on users? 
Which aspects of the application are most important to the customer? 
Which aspects of the application can be tested early in the development cycle? 
Which parts of the code are most complex, and thus most subject to errors? 
Which parts of the application were developed in rush or panic mode? 
Which aspects of similar/related previous projects caused problems? 
Which aspects of similar/related previous projects had large maintenance expenses? 
Which parts of the requirements and design are unclear or poorly thought out? 
What do the developers think are the highest-risk aspects of the application? 
What kinds of problems would cause the worst publicity? 
What kinds of problems would cause the most customer service complaints? 
What kinds of tests could easily cover multiple functionalities? 
Which tests will have the best high-risk-coverage to time-required ratio? 
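
One simple way to turn questions like these into a concrete test order is to score each area on likelihood and impact of failure and test the highest scores first. A minimal Python sketch (the areas, scores, and 1-5 scales are invented for illustration):

```python
# Sketch of risk-based prioritization: score each area on likelihood of
# failure and impact of failure (1-5 each), then test highest score first.
areas = [
    # (area, likelihood, impact)
    ("online payments",   4, 5),
    ("search",            3, 3),
    ("user profile page", 2, 2),
    ("report export",     4, 2),
]

ranked = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"risk={likelihood * impact:2d}  {name}")
```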



How will you know when to stop testing?

A: Testing can be stopped when we know that only some minor bugs remain which do not affect the functionality of the application, and when all the test cases have been executed successfully.

What metrics do you generally use in testing?

A: These software metrics are tracked by the SQA team.
Example: defect removal efficiency.

What is ECP and how will you prepare test cases?

A: ECP (Equivalence Class Partitioning) is a software testing technique used for writing test cases. It breaks the input range into equivalence partitions. The main purposes of this technique are:
1) To reduce the number of test cases to a necessary minimum.
2) To select the right test cases to cover all the scenarios.
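
A minimal Python sketch of ECP, assuming a field that accepts values from 1 to 10 (the validation function is a hypothetical stand-in for the code under test):

```python
# Equivalence Class Partitioning sketch for a field that accepts 1-10.
# One representative value per class is enough; the classes are:
#   valid: 1..10   invalid: < 1   invalid: > 10
partitions = {
    "valid (1-10)":   5,    # any value in range represents the class
    "invalid (< 1)":  0,
    "invalid (> 10)": 11,
}

def accepts(value):
    """Stand-in for the validation logic under test (hypothetical)."""
    return 1 <= value <= 10

for label, representative in partitions.items():
    print(label, "->", "accepted" if accepts(representative) else "rejected")
```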

 Test Plan contents? Who will use this doc?

A: A test plan is a document which contains the scope of testing and a risk analysis.
Behind every success there should be a plan; likewise, to get a quality product, a proper test plan should be in place.

The test plan Contents are:
1) Introduction
a) Overview
b) Acronyms
2) Risk analysis
3) Test items
4) Features and functions to be tested
5) Features and functions not to be tested
6) Test strategy
7) Test environment
8) System test schedule
9) Test deliverables
10) Resources
11) Suspension and resumption criteria
12) Staff and training

 What are Test case preparation guidelines?

A: Requirement specifications and user interface documents (screen shots of the application).

How will you do usability testing? Explain with an example.

A: Mainly to check the look and feel, ease of use, GUI (colours, fonts, alignment), help manuals, and complete end-to-end navigation.

What is Functionality testing?

A: In this testing we mainly check the functionality of the application, whether it meets the customer requirements or not.
Example: 1 + 1 = 2.

Which SDLC are you using?

A: The V-model.

 Explain V & V model?

A:Verification and Validation Model.

What are the acceptance criteria for your project?

A: The acceptance criteria will be specified by the customer, for example: "such-and-such functionality must work well enough for me."

Who will provide the LOC to u?

A: LOC (lines of code) depends on the standards the company is following.

How will you report bugs?

A: By using a bug tracking tool like Bugzilla or TestDirector; again, it depends on the company, and some companies may use their own tool.

Explain your organizations testing process?

A: 1) SRS
2) Planning
3) Test scenario design
4) Test case design
5) Execution
6) Bug reporting
7) Maintenance

What is bidirectional traceability?

Bidirectional traceability needs to be implemented both forward and backward (i.e., from requirements to end products and from end product back to requirements). 
When the requirements are managed well, traceability can be established from the source requirement to its lower level requirements and from the lower level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower level requirements can be traced to a valid source.
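
As an illustration, bidirectional traceability can be represented as a mapping maintained in both directions. A hedged Python sketch (the requirement and test-case IDs are invented):

```python
# Sketch: bidirectional traceability as two mappings derived from one source.
forward = {            # requirement -> test cases covering it
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],       # not yet covered
}
all_tests = ["TC-1", "TC-2", "TC-3", "TC-4"]   # TC-4 has no source requirement

# Backward direction: test case -> requirements it traces to
backward = {tc: [] for tc in all_tests}
for req, tests in forward.items():
    for tc in tests:
        backward[tc].append(req)

# Forward check: every source requirement is addressed by some test
print("untested requirements:", [r for r, t in forward.items() if not t])
# Backward check: every test traces back to a valid source requirement
print("tests with no source requirement:", [t for t, r in backward.items() if not r])
```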

What is stub? Explain in testing point of view?

A stub is a dummy program or component used in place of code that is not ready for testing. For example, if a project has 4 modules and the last one is not finished and there is no time, we use a dummy program to stand in for that fourth module so that all 4 modules can be run together. That dummy program is known as a stub.
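
A minimal Python sketch of a stub (and of the driver mentioned in the quiz answers later in this post); the module names and the 10% tax figure are purely illustrative:

```python
# Sketch: a stub replaces a module that is not ready yet, so the rest
# of the system can still be exercised.

def calculate_tax(amount):
    """STUB for the unfinished tax module.
    Returns a fixed, plausible value so callers can be tested."""
    return round(amount * 0.10, 2)  # hard-coded 10% placeholder

def checkout(cart_total):
    """Real module that depends on the unfinished tax module."""
    return cart_total + calculate_tax(cart_total)

# Driver: a small piece of code that calls the unit under test and
# passes it test data (see the quiz answer on drivers below).
if __name__ == "__main__":
    assert checkout(100.0) == 110.0
    print("checkout works against the tax stub")
```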

For Web Applications what type of tests are you going to do?

Web-based applications present new challenges. These challenges include:
- Short release cycles;
- Constantly Changing Technology;
- A possibly huge number of users during the initial website launch;
- Inability to control the user's running environment;
- 24-hour availability of the web site.


The quality of a website must be evident from the onset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click to a competitor's site. Such problems translate into loss of users, lost sales, and a poor company image.


To overcome these types of problems, use the following techniques: 
1. Functionality Testing
Functionality testing involves making sure the features that most affect user interactions work properly. These include: 
· forms
· searches
· pop-up windows
· shopping carts
· online payments
2. Usability Testing
Many users have low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, and many websites have become cluttered with an increasing number of features. For general-use websites, frustrated users can easily click over to a competitor's site.

Usability testing involves the following main steps:
· identify the website's purpose; 
· identify the intended users;
· define tests and conduct the usability testing;
· analyze the acquired information.

3. Navigation Testing
Good navigation is an essential part of a website, especially for sites that are complex and provide a lot of information. Assessing navigation is a major part of usability testing.

4. Forms Testing
Websites that use forms need tests to ensure that each field works properly and that the form posts all data as intended by the designer.

5. Page Content Testing
Each web page must be tested for correct content from the user perspective. These tests fall into two categories: ensuring that each component functions correctly and ensuring that the content of each is correct.

6. Configuration and Compatibility testing
A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and on-line service, and run other concurrent applications. We execute the application under every browser/platform combination to ensure the web sites work properly under various environments.

7. Reliability and Availability Testing
A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the web site simultaneously may also affect the site's availability.

8. Performance Testing
Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered. Performance testing seeks to ensure that the website server responds to browser requests within defined parameters.

9. Load Testing
The purpose of load testing is to model real-world experiences, typically by generating many simultaneous users accessing the website. Automation tools increase the ability to conduct a valid load test, because they can emulate thousands of users sending simultaneous requests to the application or the server (a minimal sketch follows).
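
A minimal sketch of the idea, using only the Python standard library to fire simultaneous requests and time them. The URL and user count are placeholders; real load tools add ramp-up profiles, think times, and detailed reporting:

```python
# Minimal load-test sketch: fire N simultaneous requests and time them.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://www.example.com/"   # placeholder
USERS = 20                        # simulated simultaneous users

def one_request(_):
    start = time.time()
    urllib.request.urlopen(URL, timeout=30).read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_request, range(USERS)))

print(f"{USERS} requests, avg {sum(timings)/len(timings):.2f}s, "
      f"max {max(timings):.2f}s")
```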

10. Stress Testing
Stress testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.

11. Security Testing
Security is a primary concern when communicating and conducting business, especially sensitive and business-critical transactions, over the internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.

Define Brainstorming and Cause-Effect Graphing?

BS:
A learning technique involving open group discussion intended to expand the range of available ideas
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (e.g., six minutes). 

CEG:
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

What is the maximum length of the test case we can write?

We can't say exactly; the test case length depends on the functionality.


A password is 6-digit alphanumeric; what are the possible input conditions?

Including special characters, the possible input conditions are:
1) Input password as = 6abcde (i.e., number first)
2) Input password as = abcde8 (i.e., character first)
3) Input password as = 123456 (all numbers)
4) Input password as = abcdef (all characters)
5) Input password less than 6 characters
6) Input password greater than 6 characters
7) Input password as special characters
8) Input password in CAPITALS, i.e., uppercase
9) Input password including a space
10) (SPACE) followed by alphabets/numericals/alphanumericals
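
These conditions translate directly into a test table. A hedged Python sketch against a hypothetical rule of exactly 6 alphanumeric characters (the rule itself is an assumption, since the question does not define what is valid):

```python
# Sketch: the input conditions above expressed as a test table against a
# hypothetical 6-character alphanumeric password rule.
import re

def is_valid_password(pwd):
    """Hypothetical rule: exactly 6 alphanumeric characters."""
    return re.fullmatch(r"[A-Za-z0-9]{6}", pwd) is not None

cases = [
    ("6abcde", True),    # number first
    ("abcde8", True),    # character first
    ("123456", True),    # all numbers
    ("abcdef", True),    # all characters
    ("abc12", False),    # fewer than 6 characters
    ("abcdefg", False),  # more than 6 characters
    ("abc!@#", False),   # special characters
    ("ABCDEF", True),    # all uppercase
    ("abc de", False),   # embedded space
    (" abcde", False),   # leading space
]

for pwd, expected in cases:
    assert is_valid_password(pwd) == expected, pwd
print("all password partition cases behave as expected")
```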

What is internationalization Testing?

Software internationalization is the process of developing software products independent of the cultural norms, language, or other specific attributes of a market.

If I give some thousand tests to execute in 2 days what do you do?

If possible, we will automate; otherwise, we execute only the test cases which are mandatory.

What does black-box testing mean at the unit, integration, and system levels?

At the unit level: tests for each software requirement using Equivalence Class Partitioning, Boundary Value Testing, and more.
At the system level: test cases for system software requirements using the trace matrix, cross-functional testing, decision tables, and more.
At the integration level: test cases for system integration covering configurations, manual operations, etc.


What is agile testing?

Agile testing is used whenever customer requirements change dynamically.

If we have no SRS or BRS but we have test cases, do you execute the test cases blindly or do you follow any other process?

The test cases should have detailed steps of what the application is supposed to do:
1) The functionality of the application.
2) In addition, you can refer to the backend, which means looking into the database, to gain more knowledge of the application.

What is Bug life cycle?
New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status becomes "Rejected".
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status becomes "Closed"; if the problem persists, it becomes "Reopen".
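
A minimal Python sketch of this life cycle as a table of allowed state transitions (the transition set is illustrative; real trackers add states such as Deferred, covered in the next question):

```python
# Sketch of the bug life cycle above as allowed state transitions.
TRANSITIONS = {
    "New":      ["Open", "Rejected"],
    "Open":     ["Fixed"],
    "Fixed":    ["Closed", "Reopen"],
    "Reopen":   ["Fixed"],
    "Rejected": [],
    "Closed":   [],
}

def move(status, new_status):
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = "New"
for nxt in ["Open", "Fixed", "Reopen", "Fixed", "Closed"]:
    s = move(s, nxt)
print("final status:", s)
```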

What is deferred status in defect life cycle?

Deferred status means the developer has accepted the bug, but it is scheduled to be rectified in a later build.

Smoke test? Do you use any automation tool for smoke testing?
Smoke testing checks whether the application performs its basic functionality properly, so that the test team can go ahead with the application. Automation tools can definitely be used for smoke testing.

Verification and validation?
Verification is static. No code is executed. Say, analysis of requirements etc.
Validation is dynamic. Code is executed with scenarios present in test cases.

When a bug is found, what is the first action?
Report it in bug tracking tool.


What is test plan and explain its contents?
A test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who will test it.

Advantages of automation over manual testing?
Savings in time, resources, and money.

What is meant by release notes?
It is a document released along with the product which describes the product. It also lists the bugs that are in deferred status.
What is Testing environment in your company, means how testing process start?
The testing process proceeds as follows:
Quality assurance unit
Quality assurance manager
Test lead
Test engineer

Give an example of high priority and low severity, low priority and high severity?
Severity level:
The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level requiring immediate attention. Severity 5 usually represents a documentation defect of minimal impact.
Severity levels:
  • Critical: the software will not run
  • High: unexpected fatal errors (includes crashes and data corruption)
  • Medium: a feature is malfunctioning
  • Low: a cosmetic issue

Severity levels
  1. Bug causes system crash or data loss.
  2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
  3. Bug causes minor functionality problems; may affect "fit and finish".
  4. Bug contains typos, unclear wording or error messages in low visibility fields.

Severity levels
  • High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
  • Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
  • Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.

Severity and Priority
Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It’s relative. It shifts over time. And it’s a business decision.
Severity is an absolute: it’s an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it’s still a high severity issue when it’s deferred to the next release. The severity hasn’t changed just because we’ve run out of time. The priority changed.

Severity Levels can be defined as follow:
S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the window to close.
The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.

S2 - Medium/Workaround. A problem exists, for example where behaviour differs from the specs, but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact to the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in
layout/formatting. Problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and of no or very low impact to business processes.

What is Use case?
A use case is a simple flow between the end user and the system. It contains pre-conditions, post-conditions, normal flows, and exceptions. It is prepared by the Team Lead/Test Lead/Tester.

Diff. between STLC and SDLC?

STLC is software test life cycle it starts with
  • Preparing the test strategy.
  • Preparing the test plan.
  • Creating the test environment.
  • Writing the test cases.
  • Creating test scripts.
  • Executing the test scripts.
  • Analyzing the results and reporting the bugs.
  • Doing regression testing.
  • Test exiting.
SDLC is software or system development life cycle, phases are...
·         Project initiation.
·         Requirement gathering and documenting.
·         Designing.
·         Coding and unit testing.
·         Integration testing.
·         System testing.
·         Installation and acceptance testing.
·         Support or maintenance.

How are you breaking down the project among team members?

It can depend on the following cases:
1) Number of modules
2) Number of team members
3) Complexity of the Project
4) Time Duration of the project
5) Team members' experience, etc.

What is Test Data Collection?

Test data is the collection of input data taken for testing the application. Various types and sizes of input data will be taken for testing the application. Sometimes, in critical applications, the test data collection will be given by the client as well.

What is Test Server?
The place where the developers put their development modules, which are accessed by the testers to test the functionality.

What are non-functional requirements?
The non-functional requirements of a software product are: reliability, usability, efficiency, delivery time, software development environment, security requirements, standards to be followed etc.

What are the differences between these three words Error, Defect and Bug?

Error: The deviation from the required logic, syntax, or standards/ethics is called an error.

There are three types of error. They are:
Syntax error (this is due to deviation from the syntax of the language that is supposed to be followed). 
Logical error (this is due to deviation from the logic the program is supposed to follow). 
Execution error (this generally happens while executing the program). 
Defect: when an error is found by a test engineer (the testing department), it is called a defect.

Bug: if the defect is accepted by the developer, it becomes a bug, which has to be fixed by the developer or postponed to the next version.

Why do we perform stress testing, resolution testing, and cross-browser testing?
Stress Testing: - We need to check the performance of the application. 
Def: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements 
Resolution Testing: - Sometimes the developer designs pages only for a 1024 resolution, and the same page displays a horizontal scroll bar at 800 x 600 resolution. Nobody likes a horizontal scroll bar appearing on the screen. That is the reason to perform resolution testing.

Cross-browser Testing: - This testing is sometimes called compatibility testing. When we develop pages to be IE-compatible, the same pages may not work properly in Firefox or Netscape, because many scripts do not support browsers other than IE. That is why we need to perform cross-browser testing.

There are two sand clocks (timers): one completes in 7 minutes and the other in 9 minutes. Using these timers, how can we bang a bell after exactly 11 minutes? Please give the solution.
1. Start both clocks. 
2. When the 7-minute clock finishes (at 7 minutes), turn it over so that it restarts.
3. When the 9-minute clock finishes (at 9 minutes), turn the 7-minute clock over again; it has been running for 2 minutes since its flip, so turning it leaves exactly 2 minutes of sand.
4. When the 7-minute clock finishes (at 9 + 2 = 11 minutes), 11 minutes are complete: bang the bell.

What is the minimum criteria for white box?

We should know the logic, code, and structure of the program or function: internal knowledge of the application, how the system works, the logic behind it, and how its structure should react to a particular action.

What are the technical reviews?

Each document should be reviewed. 'Technical review' means that, for each screen, the developer writes a technical specification, which should be reviewed by both a developer and a tester. There are functional specification reviews, unit test case reviews, code reviews, etc.



CSTE software testing certification exam question pattern


1. Define the following along with examples [25 Marks]
a. Boundary Value testing
b. Equivalence testing
c. Error Guessing
d. Desk checking
e. Control Flow analysis
Answer:
a) Boundary value Analysis: – A process of selecting test cases/data by identifying the boundaries that separate valid and invalid conditions. Tests are constructed to test the inside and outside edges of these boundaries, in addition to the actual boundary points. or A selection technique in which test data are chosen to lie along “boundaries” of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
E.g. - valid input range 1 to 10 (boundary values).
Test input data: 0, 1, 2 and 9, 10, 11.
b) Equivalence testing: – The input domain of the system is partitioned into classes of representative values, so that the no of test cases can be limited to one-per-class, which represents the minimum no. of test cases that must be executed.
E.g.- valid data range: 1-10
Test set: -2, 5, 14 (below range, within range, above range).
c) Error guessing: - A test data selection technique. The selection criterion is to pick values that seem likely to cause errors. Error guessing is based mostly upon experience, with some assistance from other techniques such as boundary value analysis. Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them.
E.g. - if any type of resource is allocated dynamically, a good place to look for errors is in the de-allocation of resources: are all resources correctly deallocated, or are some lost as the software executes?
d) Desk checking: – Desk checking is conducted by the developer of the system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This is the most traditional means for analyzing a system or program.
e) Control Flow Analysis: - It is based upon a graphical representation of the program process. In control flow analysis, the program graph has nodes which represent a statement or segment, possibly ending in an unresolved branch. The graph illustrates the flow of program control from one segment to another, as illustrated through branches. The objective of control flow analysis is to determine potential problems in logic branches that might result in a loop condition or improper processing.
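
A small Python sketch tying together the boundary value and equivalence examples above (the 1-10 range comes from the example; the helper function is illustrative):

```python
# Sketch: generating the boundary value test set from the example above
# (a valid input range of 1..10 gives the test values 0, 1, 2, 9, 10, 11).
def boundary_values(low, high):
    """Values just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(1, 10))   # -> [0, 1, 2, 9, 10, 11]

# Equivalence testing over the same range: one representative per class
# (below range, within range, above range), as in the -2, 5, 14 example.
equivalence_set = [-2, 5, 14]
print(equivalence_set)
```
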
2. You find that there is a senior tester who is making more mistakes than the junior testers. You need to communicate this to the senior tester. Also, you don't want to lose this tester. How should one go about the constructive criticism? [10 Marks]

Answer:
In the quality approach, it is the responsibility of the supervisor to make his/her subordinates successful. The effective use of criticism is a tool for improving subordinate performance.
In giving constructive criticism, you should incorporate the following tactics: -
  • Do it privately.
  • Have the facts.
  • Be prepared to help the worker improve his/her performance.
  • Be specific on Expectations.
  • Follow a specific process in giving the criticism.

3. Your manager has taken you onboard as a test lead for testing a web-based application. He wants to know what risks you would include in the Test plan. Explain each risk factor that would be a part of your test plan. [20 marks]

Answer:
Web-Based Application primary risk factors:-
A) Security: anything related to the security of the application.
B) Performance:- The amount of computing resources and code required by the system to perform its stated functions.
C) Correctness:-Data entered, processed, and outputted in the system is accurate and complete
D) Access Control:-Assurance that the application system resources will be protected
E) Continuity of processing:-The ability to sustain processing in the event a problem occurs.
F) Audit Trail:-The capability to substantiate the processing that has occurred.
G) Authorization:-Assurance that the data is processed in accordance with the intents of the management.
General or secondary risks:-
A)   Complex – anything disproportionately large, intricate or convoluted.
B) New – anything that has no history in the product.
C) Changed – anything that has been tampered with or “improved”.
D) Upstream Dependency – anything whose failure will cause cascading failure in the rest of the system.
E) Downstream Dependency – anything that is especially sensitive to failures in the rest of the system.
F) Critical – anything whose failure could cause substantial damage.
G) Precise – anything that must meet its requirements exactly.
H) Popular – anything that will be used a lot.
I) Strategic – anything that has special importance to your business, such as a feature that sets you apart from the competition.
J) Third-party – anything used in the product, but developed outside the project.
K) Distributed – anything spread out in time or space, yet whose elements must work together.
l) Buggy – anything known to have a lot of problems.
M) Recent Failure – anything with a recent history of failure.

5. You are in the contract stage of a project and are developing a comprehensive proposal for a safety-critical software system. Your director has consulted you to prepare a guideline document that will list the user's role during the acceptance testing phase. Indicate the key roles you feel the user should play during the acceptance stage. Also indicate the categories into which the acceptance requirements should fall. [10 Marks]

Answer:
1) Ensure user involvement in developing system requirements and acceptance criteria.
2) Identify interim and final products for acceptance, their acceptance criteria, and schedule.
3) Plan how and by whom each acceptance activity will be performed.
4) Plan resources for providing information.
5) Schedule adequate time for buyer staff to receive and examine the products and evaluation prior to acceptance review.
6) Prepare the acceptance plan.
7) Respond to the analysis of project entities before accepting or rejecting.
8) Approve the various interim software products.
9) Perform the final acceptance activities, including the formal acceptance testing at delivery.
10) Make an acceptance decision for each product.

6. What is parallel testing and when do we use parallel testing? Explain with an example. [5 marks]

Answer:
Testing a new or an altered data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. OR we can say that parallel testing requires the same input data be run through two versions of the same application.
Parallel testing should be used when there is uncertainty regarding the correctness of processing of the new application, and the old and new versions of the application are meant to behave the same.
E.g.-
1) Operate the old and new version of the payroll system to determine that the paychecks from both systems are reconcilable.
2) Run the old version of the application system to ensure that the operational status of the old system has been maintained in the event that problems are encountered in the new application.
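
A minimal Python sketch of parallel testing in the payroll spirit of the examples above: the same source data is run through stand-ins for the old and new versions and the outputs are reconciled (both functions are hypothetical):

```python
# Parallel testing sketch: run the same input through the old and new
# versions and reconcile the outputs.
def old_payroll(hours, rate):
    return hours * rate

def new_payroll(hours, rate):
    # new implementation (e.g., rewritten in a new system)
    return round(hours * rate, 2)

source_data = [(40, 15.0), (38.5, 22.25), (45, 30.0)]

for hours, rate in source_data:
    old, new = old_payroll(hours, rate), new_payroll(hours, rate)
    status = "OK" if abs(old - new) < 0.01 else "MISMATCH"
    print(f"{status}: old={old} new={new}")
```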

7.    What is the difference between testing Techniques and tools? Give examples. [5 marks]

Answer:
Testing technique: - a process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
Tools: - a vehicle for performing a test process. The tool is a resource to the tester, but by itself it is insufficient to conduct testing.
E.g.: - the swinging of a hammer to drive a nail. The hammer is a tool, and swinging the hammer is a technique. The concept of tools and techniques is important in the testing process: it is the combination of the two that enables the test process to be performed. The tester should first understand the testing techniques and then understand the tools that can be used with each technique.
7. Quality control activities are focused on identifying defects in the actual products produced; however your boss wants you to identify and define processes that would prevent defects. How would you explain to him to distinguish between QA and QC responsibilities? [10 Marks]

Answer:
Quality Assurance:
1) A planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements
2) An activity that establishes and evaluates the processes to produce the products.
3) Helps establish processes.
4) Sets up measurements programs to evaluate processes.
5) Identifies weaknesses in processes and improves them.
6) QA is the responsibility of the entire team.
7) Prevents the introduction of issues or defects
Quality Control:
1) The process by which product quality is compared with applicable standards; and the action taken when nonconformance is detected.
2) An activity which verifies if the product meets pre-defined standards.
3) Implements the process.
4) Verifies if specific attribute(s) are in a specific product or service
5) Identifies defects for the primary purpose of correcting defects.
6) QC is the responsibility of the tester.
7) Detects, reports and corrects defects

8) Differentiate between Transaction flow modeling, Finite state modeling, Data flow modeling and Timing modeling? [10 Marks]

Answer:
Transaction Flow modeling: -The nodes represent the steps in transactions. The links represent the logical connection between steps.
Finite state modeling:-The nodes represent the different user observable states of the software. The links represent the transitions that occur to move from state to state.
Data flow modeling:-The nodes represent the data objects. The links represent the transformations that occur to translate one data object to another.
Timing Modeling:-The nodes are Program Objects. The links are sequential connections between the program objects. The link weights are used to specify the required execution times as program executes.
9) List what you think are the two primary goals of testing
[5 Marks]

Answer:
1) Determine whether the system meets specifications (producer view)
2) Determine whether the system meets business and user needs (customer view)


1.   Verification is: 
a. Checking that we are building the right system
b. Checking that we are building the system right
c. Performed by an independent test team
d. Making sure that it is what the user really wants
                                 1)- b


2.   A regression test:
a. Will always be automated
b. Will help ensure unchanged areas of the software have not been affected
c. Will help ensure changed areas of the software have not been affected
d. Can only be run during user acceptance testing
2)- b


3.   If an expected result is not specified then: 
a. We cannot run the test
b. It may be difficult to repeat the test
c. It may be difficult to determine if the test has passed or failed
d. We cannot automate the user inputs
3)- c


4.   Which of the following could be a reason for a failure
1) Testing fault
2) Software fault
3) Design fault
4) Environment Fault
5) Documentation Fault

a. 2 is a valid reason; 1,3,4 & 5 are not
b. 1,2,3,4 are valid reasons; 5 is not
c. 1,2,3 are valid reasons; 4 & 5 are not
d. All of them are valid reasons for failure

4)- d

5. Tests are prioritized so that: 
a. You shorten the time required for testing
b. You do the best testing in the time available
c. You do more effective testing
d. You find more faults
5)- b

6. Which of the following is not a static testing technique 
a. Error guessing
b. Walkthrough
c. Data flow analysis
d. Inspections

6)- a

7. Which of the following statements about component testing is not true?
a. Component testing should be performed by development
b. Component testing is also known as isolation or module testing
c. Component testing should have completion criteria planned
d. Component testing does not involve regression testing

7)- d

8. During which test activity could faults be found most cost effectively? 
a. Execution
b. Design
c. Planning
d. Check Exit criteria completion

8)- c

9. Which, in general, is the least required skill of a good tester?
a. Being diplomatic
b. Able to write software
c. Having good attention to detail
d. Able to be relied on

9) – b

10. The purpose of requirement phase is 
a. To freeze requirements
b. To understand user needs
c. To define the scope of testing
d. All of the above

10) – d


11. The process starting with the terminal modules is called -
a. Top-down integration
b. Bottom-up integration
c. None of the above
d. Module integration

11) -b

12. The inputs for developing a test plan are taken from 
a. Project plan
b. Business plan
c. Support plan
d. None of the above

12) – a

13. Function/Test matrix is a type of 
a. Interim Test report
b. Final test report
c. Project status report
d. Management report

13) – c

14. Defect Management process does not include
a. Defect prevention
b. Deliverable base-lining
c. Management reporting
d. None of the above
14) – b
15. What is the difference between testing software developed by contractor outside your country, versus testing software developed by a contractor within your country?
a. Does not meet people needs
b. Cultural difference
c. Loss of control over reallocation of resources
d. Relinquishments of control
15) – b

16. Software testing accounts to what percent of software development costs?
a. 10-20
b. 40-50
c. 70-80
d. 5-10

16) – b





17. A reliable system will be one that:
a. Is unlikely to be completed on schedule
b. Is unlikely to cause a failure
c. Is likely to be fault-free
d. Is likely to be liked by the users
17) – b

18. How much testing is enough 
a. This question is impossible to answer
b. The answer depends on the risks for your industry, contract and special requirements
c. The answer depends on the maturity of your developers
d. The answer should be standardized for the software development industry
18) – b

19. Which of the following is not a characteristic for Testability? 
a. Operability
b. Observability
c. Simplicity
d. Robustness

19) – d


20. Cyclomatic Complexity method comes under which testing method. 
a. White box
b. Black box
c. Green box
d. Yellow box

20) – a


21. Which of these can be successfully tested using Loop Testing methodology? 
a. Simple Loops
b. Nested Loops
c. Concatenated Loops
d. All of the above
21) – d

22. To test a function, the programmer has to write a ______, which calls the function and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above

22) – b





23. Equivalence partitioning is: 
a. A black box testing technique used only by developers
b. A black box testing technique than can only be used during system testing
c. A black box testing technique appropriate to all levels of testing
d. A white box testing technique appropriate for component testing
23) – c

24. When a new testing tool is purchased, it should be used first by: 
a. A small team to establish the best way to use the tool
b. Everyone who may eventually have some use for the tool
c. The independent testing team
d. The vendor contractor to write the initial scripts
24) – a

25. Inspections can find all the following except 
a. Variables not defined in the code
b. Spelling and grammar faults in the documents
c. Requirements that have been omitted from the design documents
d. How much of the code has been covered
25) – d



1. Methodologies adopted while performing Maintenance Testing:-
a) Breadth Test and Depth Test
b) Retesting
c) Confirmation Testing
d) Sanity Testing

Evaluating the options:
a) Option a: Breadth testing is a test suite that exercises the full functionality of a product but does not test features in detail. Depth testing is a test that exercises a feature of a product in full detail.
b) Option b: Retesting is part of regression
c) Option c: Confirmation testing is a synonym for retesting
d) Option d: Sanity testing does not include full functionality
Maintenance testing includes testing some features in detail (e.g., the environment) while other features do not require detailed testing. It's a mix of both breadth and depth testing.
So, the answer is ‘A’

2. Which of the following is true about Formal Review or Inspection:-
i. Led by Trained Moderator (not the author).
ii. No Pre Meeting Preparations
iii. Formal Follow up process.
iv. Main Objective is to find defects
a) ii is true and i,iii,iv are false
b) i,iii,iv are true and ii is false
c) i,iii,iv are false and ii is true
d) iii is true and i,ii,iv are false
Evaluating the options:
Consider the first point (i). This is true, Inspection is led by trained moderator. Hence we can eliminate options (a) and (d). Now consider second point. In Inspection pre-meeting preparation is required. So this point is false. Look for option where (i) is true and (ii) is false.
The answer is ‘B’


3. The Phases of formal review process is mentioned below arrange them in the correct order.

i. Planning
ii. Review Meeting
iii. Rework
iv. Individual Preparations
v. Kick Off
vi. Follow Up
a) i,ii,iii,iv,v,vi
b) vi,i,ii,iii,iv,v
c) i,v,iv,ii,iii,vi
d) i,ii,iii,v,iv,vi
Evaluating the options:
Formal review process means 'Inspection'. Planning is the foremost step, hence we can eliminate option 'b'. Next we need to kick off the process, so the second step will be Kick Off. That's it, we found the answer: it's 'C'.
The answer is ’C’
4. Consider the following state transition diagram of a two-speed hair dryer, which is operated by pressing its one button. The first press of the button turns it on to Speed 1, second press to Speed 2 and the third press turns it off.

Which of the following series of state transitions below will provide 0-switch coverage?
a. A,C,B
b. B,C,A
c. A,B,C
d. C,B,A
Evaluating the options:
In State transition testing a test is defined for each state transition. The coverage that is achieved by this testing is called 0-switch or branch coverage. 0-switch coverage is to execute each loop once (No repetition. We should start with initial state and go till end state. It does not test ‘sequence of two state transitions’). In this case the start state is ‘OFF’, and then press of the button turns it on to Speed 1 (i.e. A). Second press turns it on to Speed 2 (i.e. B) and the third press turns it off (i.e. C). Here we do not test the combinations like what if the start state is ‘Speed 1’ or ‘Speed 2’ etc.
An alternate way of solving this is check for the options where it starts with ‘OFF’ state. So we have options ‘a’ and ‘c’ to select from. As per the state diagram from ‘OFF’ state the dryer goes to ‘Speed 1’ and then to ‘Speed 2’. So our answer should start with ‘A’ and end with ‘C’.
The answer is ’C’
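
A small Python sketch of this state machine, walking the single sequence A, B, C that gives 0-switch coverage (the state and event names follow the question; the dictionary encoding is illustrative):

```python
# Sketch of the hair-dryer state machine from this question: one button,
# OFF -> Speed1 (A) -> Speed2 (B) -> OFF (C). 0-switch coverage means
# covering each single transition once, i.e. the sequence A, B, C.
TRANSITIONS = {
    ("OFF", "press"):    "Speed1",   # transition A
    ("Speed1", "press"): "Speed2",   # transition B
    ("Speed2", "press"): "OFF",      # transition C
}

state = "OFF"
for label in ["A", "B", "C"]:
    new_state = TRANSITIONS[(state, "press")]
    print(f"{label}: {state} -> {new_state}")
    state = new_state
assert state == "OFF"   # each transition exercised exactly once
```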

5. White Box Techniques are also called as :-
a) Structural Testing
b) Design Based Testing
c) Error Guessing Technique
d) Experience Based Technique
Evaluating the options:
I guess no evaluation is required here. It’s a straight answer. White box techniques are also called as Structural testing. (as it is done using code)
The answer is ‘A’

6. What is an equivalence partition (also known as an equivalence class)?
a) A set of test cases for testing classes of objects
b) An input or output range of values such that only one value in the range becomes a test case
c) An input or output range of values such that each value in the range becomes a test case
d) An input or output range of values such that every tenth value in the range becomes a test case.
Evaluating the options:
Let's recall the definition of an equivalence partition: it is the grouping of inputs into valid and invalid classes, where any one value from a particular class serves as input. For example, if a valid class contains the values 3-5, then any value between 3 and 5 is considered as an input. All values in the class are supposed to yield the same output, hence one value in this range becomes a test case.
The answer is ‘B’

7. The Test Cases Derived from use cases 
a) Are most useful in uncovering defects in the process flows during real world use of the system
b) Are most useful in uncovering defects in the process flows during the testing use of the system
c) Are most useful in covering the defects in the process flows during real world use of the system
d) Are most useful in covering the defects at the Integration Level
Evaluating the options:
Please refer to Use case related topic in the foundation level guide “Use cases describe the “process flows” through a system based on its actual likely use” (actual likely use is nothing but the real world use of the system). Use cases are useful for uncovering defects. Hence we can eliminate options (c ) and (d). Use case uncovers defects in process flow during real world use of the system.
The answer is ‘A’
8. Exhaustive Testing is
a) Is impractical but possible
b) Is practically possible
c) Is impractical and impossible
d) Is always possible
Evaluating the options:
From the definition given in the syllabus, Exhaustive testing is impossible. But it is possible in trivial cases. Exhaustive testing is not always possible. So eliminate option ‘d’. It is not impossible also. So eliminate option ‘c’. But implementing is impractical. Hence we can conclude that exhaustive testing is impractical but possible
The answer is ‘A’

9. Which of the following is not a part of the Test Implementation and Execution Phase 
a) Creating test suites from the test cases
b) Executing test cases either manually or by using test execution tools
c) Comparing actual results
d) Designing the Tests
Evaluating the options:
Please take care of the word ‘not’ in the question. Test implementation does include Creating test suites, executing and comparing results. Hence eliminate options a, b and c. The only option left is ‘D’. Designing activities come before implementation.
The answer is ‘D’

10. Which of the following techniques is NOT a White box technique?
a) Statement Testing and coverage
b) Decision Testing and coverage
c) Condition Coverage
d) Boundary value analysis
Evaluating the options:
Please take care of the word ‘not’ in the question. We have to choose the one which is not a part of white box technique. Statement, decision, condition are the terms used in white box. So eliminate options a, b and c. Boundary value is part of black box.
The answer is ‘D’

11. A Project risk includes which of the following 
a) Organizational Factors
b) Poor Software characteristics
c) Error Prone software delivered.
d) Software that does not perform its intended functions
Evaluating the options:
a) Option a: Organizational factors can be part of project risk.
b) Option b: Poor software characteristics relate to the product, not the project; this is a product risk.
c) Option c: Error-prone software delivered: again, a product risk.
d) Option d: Software that does not perform its intended functions: again, a product risk.
The answer is ‘A’

12. In a risk-based approach the risks identified may be used to :
i. Determine the test technique to be employed
ii. Determine the extent of testing to be carried out
iii. Prioritize testing in an attempt to find critical defects as early as possible.
iv. Determine the cost of the project
a) ii is True; i, iii, iv & v are False
b) i,ii,iii are true and iv is false
c) ii & iii are True; i, iv are False
d) ii, iii & iv are True; i is false
Evaluating the options:
a) Option a: Risks identified can be used to determine the test technique.
b) Option b: Risks can be used to determine the extent of testing required. For e.g. if there are P1 bugs in a software, then it is a risk to release it. Hence we can increase the testing cycle to reduce the risk
c) Option c: If risk areas are identified before hand, then we can prioritize testing to find defects asap.
d) Option d: Risk does not determine the cost of the project. It determines the impact on the project as a whole.
Check for the option where the first 3 points are true: it's 'B'.
The answer is ‘B’

13. Which of the following is the task of a Tester?
i. Interaction with the Test Tool Vendor to identify best ways to leverage test tool on the project.
ii. Prepare and acquire Test Data
iii. Implement Tests on all test levels, execute and log the tests.
iv. Create the Test Specifications
a) i, ii, iii is true and iv is false
b) ii,iii,iv is true and i is false
c) i is true and ii,iii,iv are false
d) iii and iv is correct and i and ii are incorrect
Evaluating the options:
Not much explanation is needed in this case. As a tester, we do all the activities mentioned in options (ii), (iii) and (iv).
The answer is ‘B’



14. The Planning phase of a formal review includes the following :-
a) Explaining the objectives
b) Selecting the personnel, allocating roles.
c) Follow up
d) Individual Meeting preparations
Evaluating the options:
In this case, elimination works best. Follow-up is not a planning activity; it's a post-review task, so eliminate option 'c'. Individual meeting preparation is an activity for an individual, not a planning activity, so eliminate option 'd'. Now we are left with options 'a' and 'b'; read them two or three times. Option 'b' is the most appropriate: the planning phase of a formal review does include selecting personnel and allocating roles, while explaining the objectives belongs to the kick-off, not planning. (This is also written in the FL syllabus.)
The answer is ‘B’

15. A Person who documents all the issues, problems and open points that were identified during a formal review.
a) Moderator.
b) Scribe
c) Author
d) Manager
Evaluating the options:
I hope there is no confusion here. The answer is scribe.
The answer is ‘B’

16. Who are the persons involved in a Formal Review :-
i. Manager
ii. Moderator
iii. Scribe / Recorder
iv. Assistant Manager
a) i,ii,iii,iv are true
b) i,ii,iii are true and iv is false.
c) ii,iii,iv are true and i is false.
d) i,iv are true and ii, iii are false.
Evaluating the options:
The question is about a formal review, meaning an Inspection. First we identify the persons we know are involved in an Inspection: the manager, moderator, and scribe. So we have only the first two options to select from (the other two are eliminated). There is no assistant manager in an Inspection.
The answer is ‘B’

17. Which of the following is a Key Characteristics of Walk Through
a) Scenario , Dry Run , Peer Group
b) Pre Meeting Preparations
c) Formal Follow Up Process
d) Includes Metrics
Evaluating the options:
Pre-meeting preparation is part of an inspection, a walkthrough is not a formal process, and metrics are collected in inspections, not walkthroughs. Hence options 'b', 'c' and 'd' are eliminated.
The answer is ‘A’

18. What can static analysis NOT find?
a) the use of a variable before it has been defined
b) unreachable (“dead”) code
c) memory leaks
d) array bound violations
Evaluating the options:
Static analysis covers all the above options except memory leaks, which only manifest while the program is running and therefore require dynamic analysis. (Please refer to the FL syllabus; it is clearly stated there.)
The answer is ‘C’
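To see the distinction, consider the hypothetical Python fragment below. A static analysis tool can flag the use of a variable before its definition and the unreachable code just by reading the source, whereas the ever-growing cache only misbehaves while the program runs, which is why leak-like defects need dynamic analysis.

# Hypothetical fragment: what static analysis can and cannot find.

_cache = []

def lookup(key):
    if key is None:
        return value          # flagged statically: 'value' used before definition
        print("unreachable")  # flagged statically: dead code after return
    _cache.append(key)        # grows without bound: a leak-like defect that
    return key                # only shows up at run time (dynamic analysis)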

19. Incidents would not be raised against:
a) requirements
b) documentation
c) test cases
d) improvements suggested by users
Evaluating the options:
The first three options are obvious items against which incidents are raised. The last option is better treated as an enhancement: it is a suggestion from the users, not an incident.
The answer is ‘D’

20. A type of functional testing which investigates the functions relating to the detection of threats, such as viruses, from malicious outsiders.
a) Security Testing
b) Recovery Testing
c) Performance Testing
d) Functionality Testing
Evaluating the options:
The terms used in the question, such as detection of threats and viruses, point towards security. Security testing is a part of functional testing, and in security testing we investigate threats from malicious outsiders.
The answer is ‘A’



21. Which of the following is not a major task of Exit criteria?
a) Checking test logs against the exit criteria specified in test planning.
b) Logging the outcome of test execution.
c) Assessing if more tests are needed.
d) Writing a test summary report for stakeholders.
Evaluating the options:
The question asks for the task that is 'not' major. Option 'a' is a major task, so eliminate it. Option 'b' is not a major task of evaluating exit criteria: logging the outcome of execution is important, but it belongs to the test execution phase. Options 'c' and 'd' are both major tasks of evaluating exit criteria, so eliminate them as well.
The answer is ‘B’

22. Testing wherein we subject the target of the test to varying workloads to measure and evaluate the performance behaviour and the ability of the target to continue to function properly under these different workloads.
a) Load Testing
b) Integration Testing
c) System Testing
d) Usability Testing
Evaluating the options:
Workloads and performance are terms associated with load testing. As can be seen, the other options are not related to workloads, so we can eliminate them.
The answer is ‘A’
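A minimal sketch of the idea in Python: the same hypothetical operation is exercised under increasing numbers of concurrent workers, and the elapsed time is measured at each workload. A real load test would drive an actual system with a dedicated tool, but the shape is the same.

import time
from concurrent.futures import ThreadPoolExecutor

def operation():
    """Hypothetical stand-in for one request to the system under test."""
    time.sleep(0.01)  # simulate a 10 ms response

def measure(workers, requests=50):
    """Run `requests` operations with `workers` concurrent workers; return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(requests):
            pool.submit(operation)
    # leaving the `with` block waits for all submitted work to finish
    return time.perf_counter() - start

# Subject the same target to varying workloads and observe the behaviour.
for workers in (1, 5, 20):
    print(f"{workers:>2} workers: {measure(workers):.2f}s for 50 requests")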

23. Testing activity which is performed to expose defects in the interfaces and in the interaction between integrated components is :-
a) System Level Testing
b) Integration Level Testing
c) Unit Level Testing
d) Component Testing
Evaluating the options:
We have to identify the testing activity that finds defects arising from the interaction or integration of components. Option 'a' is not specifically about integration, option 'c' is unit testing, and option 'd', component testing, is a synonym for unit testing. Hence these three options are eliminated.
The answer is ‘B’

24. Static analysis is best described as:
a) The analysis of batch programs.
b) The reviewing of test plans.
c) The analysis of program code.
d) The use of black box testing.
Evaluating the options:
In this case we have to choose the option which 'best' describes static analysis. The options are close to each other, so read them carefully.
a) Option a: Batch programs can be analysed statically, but this is not the best description of static analysis.
b) Option b: Reviewing test plans is a static technique, but it is a review, not static analysis.
c) Option c: Static analysis is precisely the analysis of program code, without executing it.
d) Option d: This option can be ruled out, as black box testing is dynamic testing.
The answer is ‘C’

25. One of the fields on a form contains a text box which accepts alphanumeric values. Identify the valid equivalence class.
a) BOOK
b) Book
c) Boo01k
d) book
Evaluating the options:
As we know, alphanumeric means a combination of letters and numbers, so we have to choose the option that contains both.
a. Option a: contains only letters (given in capitals to create confusion).
b. Option b: contains only letters; the only difference from the option above is the mixed case.
c. Option c: contains both letters and numbers.
d. Option d: contains only letters, in lower case.
The answer is ‘C’
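A minimal sketch of this partitioning in Python, assuming the field is valid only when it contains at least one letter and at least one digit (the question's implied rule); one representative value per class is enough for equivalence testing.

import re

def is_valid_alphanumeric(value):
    """Valid class (assumed rule): at least one letter AND at least one digit."""
    return bool(re.fullmatch(r"(?=.*[A-Za-z])(?=.*[0-9])[A-Za-z0-9]+", value))

# One representative per equivalence class is enough:
for value in ("BOOK", "Book", "Boo01k", "book"):
    print(value, "->", "valid" if is_valid_alphanumeric(value) else "invalid")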

26. Reviewing the test Basis is a part of which phase 
a) Test Analysis and Design
b) Test Implementation and execution
c) Test Closure Activities
d) Evaluating exit criteria and reporting
Evaluating the options:
The test basis comprises the requirements, architecture, design and interface specifications. Looking at these words, we can straight away eliminate the last two options. Reviewing the test basis happens during test analysis and design; option 'b', implementation and execution, comes after design. So the best option is 'a'.
The answer is ‘A’




27. Reporting Discrepancies as incidents is a part of which phase :- 
a) Test Analysis and Design
b) Test Implementation and execution
c) Test Closure Activities
d) Evaluating exit criteria and reporting
Evaluating the options:
An incident is a reported discrepancy; in other terms, a defect/bug. We find defects during the execution cycle, while executing the test cases.

The answer is ‘B’
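As an illustration of what gets logged when such a discrepancy is found, here is a minimal sketch of an incident record in Python. The fields are an assumed subset of a typical incident report; real templates (e.g. the IEEE 829 test incident report) define a fuller set.

from dataclasses import dataclass

@dataclass
class IncidentReport:
    """Minimal incident record logged during test execution (assumed fields)."""
    incident_id: str
    test_case_id: str
    summary: str   # what discrepancy was observed
    expected: str  # expected result from the test case
    actual: str    # actual result observed during execution
    severity: str  # how badly the application is affected
    priority: str  # how soon the fix is needed

incident = IncidentReport(
    incident_id="INC-101",
    test_case_id="TC-042",
    summary="Total price not updated after removing an item",
    expected="Total recalculated to 18.00",
    actual="Total still shows 23.00",
    severity="Major",
    priority="High",
)
print(incident)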

28. Which of the following items would not come under Configuration Management?
a) operating systems
b) test documentation
c) live data
d) user requirement document
Evaluating the options:
We have to choose the option that does 'not' come under Configuration Management (CM). CM is about maintaining the integrity of products such as components, data and documentation.
a) Option a: maintaining the operating system configuration used in the test cycle is part of CM.
b) Option b: Test documentation is part of CM.
c) Option c: Test data is part of CM, but the option here is 'live data', which is not: live data keeps changing in a real scenario.
d) Option d: Requirements and documents are again part of CM.
The only option that does not fall under CM is 'c'.
The answer is ‘C’

29. Handover of Test-ware is a part of which Phase 
a) Test Analysis and Design
b) Test Planning and control
c) Test Closure Activities
d) Evaluating exit criteria and reporting
Evaluating the options:
Handover is typically part of the closure activities; it is not part of analysis, design or planning, nor of evaluating exit criteria. After the test cycle is closed, the testware is handed over to the maintenance organization.
The answer is ‘C’


30. The Switch is switched off once the temperature falls below 18 and then it is turned on when the temperature is more than 21. Identify the Equivalence values which belong to the same class.
a) 12,16,22
b) 24,27,17
c) 22,23,24
d) 14,15,19
Evaluating the options:
Read the question carefully: we have to choose values that all lie in the same class, so first divide the range into classes. When the temperature falls below 18, the switch is turned off; this forms one class. When the temperature is more than 21, the switch is turned on; for values between 18 and 21 no action is taken. This gives the three classes shown below.
Class I: less than 18 (switch turned off)
Class II: 18 to 21 (no action)
Class III: above 21 (switch turned on)
From the given options, select the one whose values all come from a single class. The values in options 'a', 'b' and 'd' straddle more than one class, so eliminate them; the values in option 'c' (22, 23, 24) all lie in Class III. (Note that the question does not ask about valid or invalid classes, only about values in the same class.)
The answer is ‘C’
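The same partitioning can be written directly in code. A minimal sketch in Python: classify each candidate value, and a set of values belongs to one equivalence class exactly when all of them classify identically.

def partition(temp):
    """Equivalence classes derived from the switch specification."""
    if temp < 18:
        return "Class I: below 18 (switch off)"
    elif temp <= 21:
        return "Class II: 18 to 21 (no action)"
    else:
        return "Class III: above 21 (switch on)"

for option, values in {"a": (12, 16, 22), "b": (24, 27, 17),
                       "c": (22, 23, 24), "d": (14, 15, 19)}.items():
    same_class = len({partition(v) for v in values}) == 1
    print(option, "-> same class" if same_class else "-> mixed classes")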


1 : With thorough testing it is possible to remove all defects from a program prior to delivery to the customer.

a. True 
b. False

ANSWER : b

2 : Which of the following are characteristics of testable software?

a. observability 
b. simplicity 
c. stability 
d. all of the above

ANSWER : d

3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called

a. black-box testing
b. glass-box testing
c. grey-box testing
d. white-box testing

ANSWER : a
 

4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called

a. behavioral testing
b. black-box testing
c. grey-box testing 
d. white-box testing

ANSWER : d

5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing?

a. behavioral errors
b. logic errors
c. performance errors
d. typographical errors
e. both b and d

ANSWER : e

6 : Program flow graphs are identical to program flowcharts.

a. True 
b. False

ANSWER : b

7 : The cyclomatic complexity metric provides the designer with information regarding the number of

a. cycles in the program
b. errors in the program
c. independent logic paths in the program
d. statements in the program

ANSWER : c

8 : The cyclomatic complexity of a program can be computed directly from a PDL representation of an algorithm without drawing a program flow graph.

a. True 
b. False

ANSWER : a
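This is because V(G) equals the number of binary decision points plus one (equivalently, E - N + 2 for a connected flow graph), so it can be counted straight from the source or PDL text. A minimal sketch in Python, counting decisions in Python source via its syntax tree; treating each extra and/or operand as an additional predicate is one common convention.

import ast

def cyclomatic_complexity(source):
    """V(G) = number of binary decision points + 1, counted from the source text."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):      # each extra and/or adds a branch
            decisions += len(node.values) - 1
    return decisions + 1

snippet = """
if a and b:
    x = 1
while x < 10:
    x += 1
"""
print(cyclomatic_complexity(snippet))  # 4: (if) + (and) + (while) + 1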

9 : Condition testing is a control structure testing technique where the criterion used to design test cases is that they

a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

ANSWER : b

10 : Data flow testing is a control structure testing technique where the criterion used to design test cases is that they

a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

ANSWER : c

11 : Loop testing is a control structure testing technique where the criterion used to design test cases is that they

a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs

ANSWER : d

12 : Black-box testing attempts to find errors in which of the following categories

a. incorrect or missing functions
b. interface errors
c. performance errors
d. all of the above
e. none of the above

ANSWER : d
 

13 : Graph-based testing methods can only be used for object-oriented systems

a. True 
b. False

ANSWER : b

14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.

a. True 
b. False

ANSWER : a


15 : Boundary value analysis can only be used to do white-box testing.

a. True 
b. False

ANSWER : b
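Boundary value analysis is a black-box technique because the boundaries come from the specification alone, with no knowledge of the code. A minimal sketch in Python, reusing the temperature-switch specification from earlier in this post (off below 18, on above 21):

# Boundary value analysis derived purely from the specification:
# class boundaries at 18 and 21, so test just below, at, and just above each.
boundaries = (18, 21)
test_values = sorted({b + delta for b in boundaries for delta in (-1, 0, 1)})
print(test_values)  # [17, 18, 19, 20, 21, 22]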
 

16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.

a. True 
b. False

ANSWER : b

17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.

a. True 
b. False

ANSWER : a

18 : Test case design "in the small" for OO software is driven by the algorithmic detail of
the individual operations.

a. True 
b. False

ANSWER : a

19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.

a. True 
b. False

ANSWER : b

20 : Use-cases can provide useful input into the design of black-box and state-based tests of OO software.

a. True 
b. False

ANSWER : a

21 : Fault-based testing is best reserved for

a. conventional software testing
b. operations and classes that are critical or suspect
c. use-case validation
d. white-box testing of operator algorithms

ANSWER : b

22 : Testing OO class operations is made more difficult by

a. encapsulation 
b. inheritance 
c. polymorphism 
d. both b and c

ANSWER : d

23 : Scenario-based testing

a. concentrates on actor and software interaction
b. misses errors in specifications
c. misses errors in subsystem interactions
d. both a and b

ANSWER : a

24 : Deep structure testing is not designed to

a. examine object behaviors
b. exercise communication mechanisms
c. exercise object dependencies
d. exercise structure observable by the user

ANSWER : d

25 : Random order tests are conducted to exercise different class instance life histories.

a. True 
b. False

ANSWER : a
 

26 : Which of these techniques is not useful for partition testing at the class level

a. attribute-based partitioning
b. category-based partitioning
c. equivalence class partitioning
d. state-based partitioning

ANSWER : c

27 : Multiple class testing is too complex to be tested using random test cases.

a. True 
b. False

ANSWER : b

28 : Tests derived from behavioral class models should be based on the 

a. data flow diagram
b. object-relation diagram
c. state diagram
d. use-case diagram

ANSWER : c

29 : Client/server architectures cannot be properly tested because network load is highly variable.

a. True 
b. False

ANSWER : b

30 : Real-time applications add a new and potentially difficult element to the testing mix

a. performance 
b. reliability 
c. security 
d. time

ANSWER : d


1) The approach/document used to make sure all the requirements are covered when writing test cases
a) Test Matrix
b) Checklist
c) Test bed
d) Traceability Matrix

2) Executing the same test case with a number of inputs on the same build is called
a) Regression Testing
b) ReTesting
c) Ad hoc Testing
d) Sanity Testing

3) Control Charts is a statistical technique to assess, monitor, and maintain the stability of a process.
a) True
b) False

4) To check whether we are developing the right product according to the customer requirements or not. It is a static process.
a) Validation
b) Verification
c) Quality Assurance
d) Quality Control

5) To check whether we have developed the product according to the customer requirements or not. It is a dynamic process.
a) Validation
b) Verification
c) Quality Assurance
d) Quality Control

6) Staff development plan describes how the skills and experience of the project team members will be developed.
a) True
b) False

7) It is a set of levels that defines a testing maturity hierarchy
a) TIM (Testing Improving Model)
b) TMM (Testing Maturity Model)
c) TQM(Total Quality Management)

8) Non-functional software testing done to check whether the user interface is easy to use and understand
a) Usability Testing
b) Security Testing
c) Unit testing
d) Black Box Testing

9) The reviewed and approved document (e.g. Test Plan, System Requirement Specification) is called
a) Delivery Document
b) Baseline Document
c) Checklist

10) What are the Testing Levels?
a) Unit Testing
b) Integration Testing
c) System Testing and Acceptance Testing.
d) All the above

11) Cost of quality = Prevention Cost + Appraisal cost + Failure cost
a) True
b) False

12) A useful tool to visualize, clarify, link, identify, and classify possible causes of a problem. This is also called a “fishbone diagram”. What is it?
a) Pareto Analysis
b) Cause-and-Effect Diagram

13) It measures the quality of the processes used to create a quality product. It is a system of management activities, it is a preventive process, it applies to the entire life cycle, and it deals with process.
a) Validation
b) Verification
c) Quality Assurance
d) Quality Control

14) Variance from product specifications is called?
a) Report
b) Requirement
c) Defect

15) Verification is
a) Process based
b) Product based

16) White box testing is not known as ___________
a) Glass box testing
b) Closed box testing
c) Open box testing
d) Clear box testing

17) Naming the events to be analyzed, counting the named incidents, ranking the counts by frequency using a bar chart, and validating the reasonableness of the analysis is called
a) Pareto Analysis
b) Cause and Effect Diagram
c) SWOT Analysis
d) Pie Charts

18) Retesting of a single program or component after a change has been made?
a) Full Regression Testing
b) Unit Regression
c) Regional Regression
d) Retesting

19) Requirements and Analysis, Design, Development or Coding, Testing, and Maintenance together are called the Software Development Life Cycle (SDLC)
a) True
b) False

20) The testing which is done by going through the code is known as
a) Unit Testing
b) Blackbox testing
c) White box Testing
d) Regression testing

21) Configuration Management Plan describes the Configuration Management procedures and structures to be used.
a) True
b) False

22) This type of testing attempts to find incorrect or missing functions; errors in data structures or external database access; interface errors; performance errors; and initialization and termination errors. It is called
a) White Box Testing
b) Grey Box Testing
c) Black Box Testing
d) Open Box Testing

23) Phase Definition. It will come under
a) CMM Level 1
b) CMM Level 2
c) None

24) Software testing which is done without planning and documentation is known as
a) Ad hoc Testing
b) Unit Testing
c) Regression testing
d) Functional testing.

25) Acceptance testing is known as
a) Beta Testing
b) Greybox testing
c) Test Automation
d) White box testing

26) Retesting the entire application after a change has been made called as?
a) Full Regression Testing
b) Unit Regression
c) Regional Regression
d) Retesting

27) Boundary value analysis belongs to which testing method?
a) Black Box testing
b) White Box testing

28) It measures the quality of a product. It is a specific part of the QA procedure, it is a corrective process, it applies to a particular product, and it deals with the product.
a) Validation
b) Verification
c) Quality Assurance
d) Quality Control

29) What are the Types of Integration Testing?
a) Big Bang Testing
b) Bottom Up Testing
c) Top Down Testing
d) All the above

30) Product risk affects the quality or performance of the software.
a) True
b) False

31) A metric used to measure the characteristics of documentation and code is called
a) Process metric
b) Product Metric
c) Test metrics

32) Which is non-functional software testing?
a) Unit Testing
b) Black box testing
c) Performance Testing
d) Regression testing

33) The process that deals with the technical and management issues of software development is called?
a) Delivery Process
b) Testing Process
c) Software Process

34) Executing the same test case on a modified build is called
a) Regression Testing
b) Retesting
c) Ad hoc Testing
d) Sanity Testing

35) Which is Black-Box Testing method?
a) equivalence partitioning
b) code coverage
c) fault injection

36) Business risk affects the organization developing or procuring the software.
a) True
b) False

37) Stratification is a technique used to analyze/divide a universe of data into homogeneous groups (strata).
a) True
b) False

38) Automation Testing should be done before starting Manual testing.

Is the above statement correct?
a) Yes
b) No

39) The earlier a defect is found, the cheaper it is to fix.

Is the above statement correct?
a) Yes
b) No

40) Telling the developer which bug to fix first is called
a) Severity
b) Priority
c) Fix ability
d) Traceability

41) Software testing is a process of evaluating a system by manual or automated means and verifying that it satisfies the specified requirements, or identifying differences between expected and actual results.
a) True
b) False

42) Retesting the modules connected to the program or component after a change has been made is called?
a) Full Regression Testing
b) Unit Regression
c) Regional Regression
d) Retesting.

43) Metrics such as the number of defects found in internal testing compared to the defects found in customer tests, the status of test activities against the plan, and the test coverage achieved so far come under
a) Process Metric
b) Product Metric
c) Test Metric

44) Alpha testing will be done at,
a) User's site
b) Developers' site

45) SPICE Means
a) Software Process Improvement and Capability Determination
b) Software Process Improvement and Compatibility Determination.
c) Software Process Invention and Compatibility Determination.
d) Software Process Improvement and Control Determination

46) Requirements Specification, Planning, Test Case Design, Execution, Bug Reporting & Maintenance: this life cycle comes under
a) SDLC
b) STLC
c) SQLC
d) BLC

47) It provides a set of levels and an assessment model, and presents a set of recommended practices that allow organizations to improve their testing processes.
a) TIM (Testing Improving Model)
b) TMM (Testing Maturity Model)
c) TQM(Total Quality Management)

48) Standards and procedures for managing changes in an evolving software product is called?
a) Confirmation Management
b) Confederation Management
c) Configuration Management
d) Compatibility Management

49) Paths Tested = Number of Paths Tested / Total Number of Paths
a) True
b) False

50) This testing technique examines the basic program structure and derives the test data from the program logic, ensuring that all statements and conditions are executed at least once. It is called
a) Black box Testing
b) White box Testing
c) Grey Box Testing
d) Closed Box Testing

51) This type of test includes how well the user will be able to understand and interact with the system.
a) Usability Testing
b) User Acceptance Testing
c) Alpha Testing
d) Beta Testing.

52) Defects generally fall into the following categories?
a) WRONG
b) MISSING
c) EXTRA
d) All the above

53) What is correct Software Process Cycle?
a) Plan(P)------>Check(C)------>Act(A)----->Do(D)
b) Plan(P)------>Do(D)------>Check(C)----->Act(A)
c) Plan(P)------>Do(D)------>Act(A)----->Check(C)

54) Conducted to validate that the application, database, and network they run on can handle projected volumes of users and data effectively. The test is conducted jointly by developers, testers, DBAs and network associates after system testing. It is called
a) Functional Testing
b) Stress/Load Testing
c) Recovery Testing
d) Integration Testing

55) The Maintenance Plan predicts the maintenance requirements of the system, the maintenance costs and the effort required.
a) True
b) False

56) Beta testing will be done by
a) Developer
b) User
c) Tester

57) The Validation Plan describes the approach, resources and schedule used for system validation.
a) True
b) False

58) Integration. It will come under
a) CMM Level 1
b) CMM Level 3
c) CMM Level 2
d) None

59) Types of quality tools are Problem Identification Tools and Problem Analysis Tools.
a) True
b) False

60) Which software development life cycle model requires testing activities to start along with the development activities themselves?
a) Waterfall model
b) Spiral Model
c) V-model
d) Linear model

61) A metric used to measure the characteristics of the methods, techniques and tools employed in developing, implementing and maintaining the software system is called
a) Process metric
b) Product Metric
c) Test metrics

62) A check sheet (checklist) is considered a simple but powerful statistical tool because it differentiates between two extremes.
a) True
b) False

63) The application should be stable, and a clear design and flow of the application are needed, for automation testing.
a) False
b) True

64) Quality plan describes the quality procedures and standards that will be used in a project.
a) False
b) True

65) How severely the bug is affecting the application is called
a) Severity
b) Priority
c) Fix ability
d) Traceability

66) Project risk affects the schedule or resources.
a) True
b) False

67) The testing which is done to make sure that existing features are not affected by new changes is called
a) Recursive testing
b) Whitebox testing
c) Unit testing
d) Regression testing

68) Management and Measurement. It will come under
a) CMM Level 1
b) CMM Level 3
c) CMM Level 4
d) CMM Level 2

69) AdHoc testing is a part of
a) Unit Testing
b) Regression Testing
c) Exploratory Testing
d) Performance Testing

70) Cost of Production = Right-The-First-Time cost (RTF) + Cost of Quality.
a) True
b) False

71) ------------- means the test environment (hardware and software setup) under which the application will run smoothly
a) Test Bed
b) Checkpoint
c) Code Walk through
d) Checklist

72) TQM represents
a) Tool Quality Management
b) Test Quality Manager
c) Total Quality Management
d) Total Quality Manager

73) Optimization, Defect Prevention, and Quality Control come under
a) CMM Level 2
b) CMM Level 3
c) CMM Level 4
d) CMM Level5

74) Unit Testing will be done by
a) Testers
b) End Users
c) Customer
d) Developers

75) Beta testing will be done at
a) User's place
b) Developer's place

76) A plan to overcome a risk is called
a) Migration Plan
b) Master plan
c) Maintenance plan
d) Mitigation Plan

77) Splitting a project into tasks and estimating the time and resources required to complete each task is called project scheduling.
a) True
b) False


(1) d
(2) b
(3) a
(4) b
(5) a
(6) a
(7) b
(8) a
(9) b
(10) d
(11) a
(12) b
(13) c
(14) c
(15) a
(16) b
(17) a
(18) b
(19) a
(20) c
(21) a
(22) c
(23) b
(24) a
(25) a
(26) a
(27) a
(28) d
(29) d
(30) a
(31) b
(32) c
(33) c
(34) a
(35) a
(36) a
(37) a
(38) b
(39) a
(40) b
(41) a
(42) c
(43) c
(44) b
(45) a
(46) b
(47) a
(48) c
(49) a
(50) b
(51) a
(52) d
(53) b
(54) b
(55) a
(56) b
(57) a
(58) b
(59) a
(60) c
(61) a
(62) a
(63) b
(64) b
(65) a
(66) a
(67) d
(68) c
(69) c
(70) a
(71) a
(72) c
(73) d
(74) d
(75) a
(76) d
(77) a

