Interview-Based Questions and Answers - Manual Testing Notes By ShariqSP
Fundamental Questions and Suggested Answers for Manual Testing Interviews
1. What is manual testing, and why is it important?
Answer: Manual testing is the process of executing test cases by hand, without automation tools, to identify defects in a software application. It is important because it allows testers to understand the user’s perspective and detect issues that may not be identified through automated scripts, such as usability or UI inconsistencies. Manual testing is especially valuable in early development stages, as it provides immediate feedback and helps ensure the software meets user requirements.
2. What are the key differences between manual testing and automation testing?
Answer: Manual testing requires human effort to execute test cases, while automation testing uses tools and scripts to execute tests automatically. Manual testing is suitable for exploratory, usability, and ad hoc testing where human observation is crucial, while automation is ideal for repetitive, regression, and large-scale testing due to its efficiency and speed. Automation testing requires an initial investment in scripting but reduces long-term effort and time, whereas manual testing is more flexible and can adapt quickly to changes.
3. What are the different levels of testing, and why are they important?
Answer: The primary levels of testing are unit testing, integration testing, system testing, and acceptance testing. Each level plays a vital role:
- Unit Testing: Validates individual components for correct behavior.
- Integration Testing: Ensures modules work together as expected.
- System Testing: Tests the entire system’s functionality end-to-end.
- Acceptance Testing: Confirms the software meets business requirements and is ready for deployment.
4. Define and differentiate between verification and validation.
Answer: Verification and validation are both processes for ensuring quality in software.
- Verification: Involves evaluating the product’s design and requirements through reviews, walkthroughs, and inspections to ensure the software is being built correctly.
- Validation: Involves testing the actual software to ensure it meets user expectations and requirements, confirming that the “right” product has been built.
5. What is exploratory testing, and when should it be used?
Answer: Exploratory testing is an unscripted, hands-on approach where testers explore the software without pre-defined test cases, relying on their intuition and experience to identify issues. It is particularly useful when there is limited documentation, for new features, or for detecting unusual or complex defects that may not be caught by scripted tests. Exploratory testing is also helpful in identifying usability issues and understanding the application from a user’s perspective.
6. Explain the V-model in software development and how it relates to manual testing.
Answer: The V-model (Verification and Validation model) is a software development model that emphasizes the relationship between development stages and corresponding testing phases. Each development phase has a corresponding testing phase, creating a “V” shape. For example, requirements gathering is followed by acceptance testing, design by system testing, and module design by unit testing. This model ensures that testing is planned in parallel with development, allowing early detection of defects and alignment between development and testing goals.
7. What is a test plan, and what are its key components?
Answer: A test plan is a document outlining the objectives, scope, approach, and focus of software testing. Key components include:
- Scope: Defines the areas and features to be tested.
- Objectives: States the purpose and goals of testing.
- Resources and Roles: Lists the team members and resources allocated to testing.
- Schedule: Provides a timeline for testing activities.
- Test Environment: Describes the hardware, software, and configurations needed for testing.
- Risk and Mitigation: Identifies potential risks and how they will be managed.
8. What is a test case, and how do you write an effective test case?
Answer: A test case is a set of conditions or steps used to verify a specific feature or functionality of an application. To write an effective test case:
- Make it clear and concise so that any tester can understand and execute it.
- Define the preconditions, inputs, steps, expected result, and postconditions.
- Include both positive and negative scenarios for thorough coverage.
- Use descriptive titles that summarize the test’s purpose.
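The structure above can be sketched in code. Here is a minimal pytest-style illustration of one positive and one negative test case; the `login` stub and its credentials are assumptions standing in for the real system under test:

```python
# Sketch: one positive and one negative test case for a hypothetical
# login feature. The `login` stub stands in for the real application.

def login(username: str, password: str) -> bool:
    """Stand-in for the system under test (assumption for illustration)."""
    return username == "alice" and password == "s3cret!"

def test_login_valid_credentials():
    # Precondition: user "alice" exists with password "s3cret!"
    # Steps: submit valid credentials
    # Expected result: login succeeds
    assert login("alice", "s3cret!") is True

def test_login_invalid_password():
    # Negative scenario: a wrong password must be rejected
    assert login("alice", "wrong") is False

test_login_valid_credentials()
test_login_invalid_password()
```

Note how each test carries its preconditions, steps, and expected result as comments, and how the descriptive function names summarize the test's purpose.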
9. What is a test scenario, and how is it different from a test case?
Answer: A test scenario is a high-level description of what needs to be tested, typically representing a particular feature or user action. It focuses on “what” to test rather than the specific steps to take. In contrast, a test case is more detailed and includes the specific steps, inputs, and expected results. For example, a test scenario might be “Validate user login functionality,” while test cases would detail steps for valid login, invalid login, and edge cases.
10. Define the different types of testing (e.g., functional, non-functional, regression, smoke, sanity).
Answer: Various types of testing serve different purposes:
- Functional Testing: Ensures that each feature of the application works according to requirements.
- Non-Functional Testing: Tests aspects like performance, scalability, and usability.
- Regression Testing: Validates that new changes do not affect existing functionality.
- Smoke Testing: Preliminary testing to check if the critical features are working before detailed testing begins.
- Sanity Testing: A quick test to check if a particular function or bug fix works as expected.
Real-World Scenario-Based Questions and Suggested Answers for Manual Testing Interviews
1. If you’re testing an e-commerce website, what test scenarios would you consider for the checkout process?
Answer: For the checkout process on an e-commerce website, I would cover scenarios that include:
- Validating the cart summary, including item details, quantities, and prices.
- Testing different payment methods (credit/debit card, PayPal, etc.).
- Applying promo codes or discounts and verifying calculations.
- Handling shipping address selection and addition of new addresses.
- Testing checkout with guest vs. registered user accounts.
- Handling cases of insufficient stock and low inventory warnings.
- Testing order confirmation and receipt generation.
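One of the scenarios above, verifying promo-code calculations, can be made concrete with a small sketch. The code name, the 10% discount rate, and the two-decimal rounding rule are assumptions for illustration:

```python
# Sketch: verifying a promo-code calculation in the cart.
# The "SAVE10" code, its 10% rate, and the rounding rule are assumptions.

def apply_promo(subtotal: float, code: str) -> float:
    discounts = {"SAVE10": 0.10}
    rate = discounts.get(code, 0.0)
    return round(subtotal * (1 - rate), 2)

# Positive case: a known code reduces the total as expected
assert apply_promo(200.00, "SAVE10") == 180.00

# Negative case: an unknown code leaves the total unchanged
assert apply_promo(200.00, "BOGUS") == 200.00
```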
2. How would you test a login feature with multiple user roles (e.g., admin, guest, registered user)?
Answer: For testing a login feature with multiple user roles, I would:
- Test successful login for each user role (admin, guest, registered user).
- Verify access permissions based on role after login (e.g., admins have access to all areas, while guests have limited access).
- Attempt invalid login with incorrect credentials for each role.
- Test account lockout after multiple failed login attempts.
- Check password recovery for registered users.
- Ensure session handling (e.g., auto-logout after inactivity) works as expected.
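The lockout scenario above can be modeled with a short sketch. The three-attempt threshold and the `Account` class are assumptions for illustration, not a real implementation:

```python
# Sketch: account lockout after repeated failed logins.
# The threshold of 3 attempts is an assumption for illustration.

MAX_ATTEMPTS = 3

class Account:
    def __init__(self, password: str):
        self.password = password
        self.failed_attempts = 0
        self.locked = False

    def login(self, attempt: str) -> bool:
        if self.locked:
            return False
        if attempt == self.password:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

acct = Account("s3cret!")
assert not acct.login("guess1")
assert not acct.login("guess2")
assert not acct.login("guess3")   # third failure locks the account
assert acct.locked
assert not acct.login("s3cret!")  # even the correct password is now rejected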
3. Describe how you would test an ATM for both functional and non-functional requirements.
Answer: For functional testing of an ATM, I would include:
- Testing card insertion, PIN validation, and balance inquiry functions.
- Performing withdrawal, deposit, and fund transfer operations.
- Testing receipt printing and on-screen transaction confirmation.
- Checking error handling for card expiration, insufficient funds, and invalid PIN.
For non-functional testing, I would cover:
- Response time: ensuring transactions are processed within an acceptable time frame.
- Stress testing for peak usage times.
- Security checks for data protection and safe logout.
- Accessibility tests, ensuring usability for people with disabilities.
4. For a banking app, how would you test the fund transfer feature? Outline test scenarios.
Answer: Test scenarios for fund transfer in a banking app include:
- Verifying successful fund transfers between accounts within the same bank and to other banks.
- Testing daily transfer limits and error handling for exceeding the limit.
- Validating required fields (recipient name, account number) and format checks.
- Checking for OTP or two-factor authentication during transfer.
- Verifying transfer reversal in case of transaction failure.
- Checking notifications and transaction history updates post-transfer.
5. In a healthcare app, how would you test the appointment booking functionality?
Answer: For testing appointment booking in a healthcare app, I would:
- Test scheduling an appointment with available doctors and available time slots.
- Verify error handling for double booking or unavailable slots.
- Check confirmation notifications via email/SMS after booking.
- Verify rescheduling and cancellation options.
- Ensure that appointment details appear correctly in the user's appointment history.
- Test any dependencies, such as required documents or prior approval needed before booking.
6. Explain how you would test an email sign-up form with mandatory and optional fields.
Answer: For an email sign-up form, I would:
- Verify the form accepts valid emails and rejects invalid email formats.
- Check that mandatory fields (e.g., email, password) cannot be skipped.
- Test optional fields to ensure they don’t impact form submission if left blank.
- Validate password strength, length, and special character requirements.
- Check for error messages for incorrect or missing inputs.
- Verify if the user receives a verification email upon successful sign-up.
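The email-format check above can be sketched with a simple validator. The regular expression here is deliberately minimal, an assumption for illustration rather than a full RFC 5322 validator:

```python
import re

# Sketch: a simple email-format check of the kind a sign-up form might
# apply. The pattern is deliberately minimal (an assumption for
# illustration), not a complete RFC 5322 validator.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(email: str) -> bool:
    return bool(EMAIL_PATTERN.match(email))

# Positive and negative inputs a tester might try:
assert is_valid_email("user@example.com")
assert not is_valid_email("user@@example.com")   # double @ sign
assert not is_valid_email("no-at-sign.com")      # missing @
assert not is_valid_email("")                    # empty input
```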
7. How would you derive test cases for a calculator application? Give examples of functional and edge cases.
Answer: For a calculator app, I would consider:
- Basic operations: addition, subtraction, multiplication, division.
- Edge cases, like dividing by zero, calculating large numbers, and handling negative values.
- Validating order of operations (e.g., BODMAS rule).
- Testing with decimal values, fractions, and non-numeric inputs to check error handling.
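The functional and edge cases above can be sketched against a stand-in `divide` helper; the function and its error behavior are assumptions standing in for the real calculator:

```python
# Sketch of functional and edge-case checks for a calculator's divide
# operation. The `divide` helper is a stand-in for the real application.

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b

# Functional case: normal division
assert divide(10, 4) == 2.5

# Edge case: division by zero must raise an error, not crash silently
try:
    divide(1, 0)
    raise AssertionError("expected an error for division by zero")
except ZeroDivisionError:
    pass

# Edge cases: negative values and large numbers
assert divide(-10, 2) == -5.0
assert divide(1e10, 2) == 5e9
```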
8. Describe test scenarios for a mobile shopping app's search functionality.
Answer: For search functionality in a shopping app, I would:
- Test keyword-based searches to ensure relevant products are returned.
- Verify handling of spelling errors or partial keywords.
- Check sorting and filtering options on search results.
- Validate product search by category or brand.
- Ensure error messages are displayed for no results found.
- Test search performance and response time under heavy load.
9. What test cases would you write for a payment gateway integration on an e-commerce site?
Answer: For payment gateway testing, I would consider:
- Testing successful payment using multiple methods (credit/debit cards, net banking, wallets).
- Verifying error handling for incorrect card details or expired cards.
- Testing session timeouts and secure redirection for payments.
- Verifying that the order status updates correctly after a successful transaction.
- Checking cancellation or refund functionality in case of failed transactions.
- Ensuring security protocols, such as encryption, are applied during transactions.
10. For a ride-sharing app, outline test cases for booking a ride.
Answer: For a ride-sharing app, I would outline the following test cases:
- Booking a ride with valid pickup and destination addresses.
- Selecting ride types (e.g., standard, premium, shared) and validating costs.
- Testing driver availability and estimated arrival time updates.
- Checking for correct fare calculations, including surcharges and discounts.
- Testing cancellation options and fee calculations for late cancellations.
- Verifying notifications (e.g., driver arrival, trip start, and trip end).
Test Case Design & Writing Questions and Suggested Answers for Manual Testing Interviews
1. How do you prioritize test cases for execution?
Answer: I prioritize test cases based on factors like business impact, critical functionalities, and defect-prone areas. High-priority test cases cover core functionalities and critical paths that are most visible to the end-users. I also prioritize based on risk assessment, focusing first on areas with the highest likelihood of failure. For example, if testing an e-commerce site, checkout and payment functionalities would have higher priority than UI elements.
2. Explain the difference between positive and negative test cases with examples.
Answer: Positive test cases validate that the system functions correctly when given valid input (e.g., logging in with correct username and password). Negative test cases check how the system handles invalid input or unexpected conditions (e.g., attempting to log in with an incorrect password). Positive testing confirms expected behavior, while negative testing ensures the system handles errors gracefully.
3. What are boundary value analysis (BVA) and equivalence partitioning, and how are they used in writing test cases?
Answer: BVA and equivalence partitioning are techniques to reduce test cases while maximizing coverage.
- Boundary Value Analysis (BVA): Tests the edges of input ranges, as errors are often found at boundary values. For example, if an age field accepts 18-60, I’d test with values like 17, 18, 60, and 61.
- Equivalence Partitioning: Divides input data into partitions where all values should behave similarly. For instance, testing with one valid value (e.g., 20) from the 18-60 age range is often sufficient, reducing redundant tests.
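Both techniques can be sketched for the 18-60 age range used in the examples above. The `is_valid_age` stub is an assumption standing in for the system under test:

```python
# Sketch: deriving boundary-value and equivalence-partition inputs for
# an age field that accepts 18-60 (the range from the example above).

def boundary_values(low: int, high: int) -> list:
    """Values just outside, on, and just inside each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid_age(age: int) -> bool:
    """Stand-in for the system under test (assumption for illustration)."""
    return 18 <= age <= 60

# BVA: errors cluster at the edges, so test 17, 18, 19, 59, 60, 61
for age in boundary_values(18, 60):
    expected = 18 <= age <= 60
    assert is_valid_age(age) == expected

# Equivalence partitioning: one representative per partition is enough
partitions = {"below": 10, "valid": 20, "above": 70}
assert not is_valid_age(partitions["below"])
assert is_valid_age(partitions["valid"])
assert not is_valid_age(partitions["above"])
```

Six boundary values plus one representative per partition give strong coverage of the range with far fewer tests than checking every age.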
4. How would you write test cases for an elevator system? Describe positive and negative scenarios.
Answer: For an elevator system, I would design test cases covering various scenarios:
- Positive Scenarios: Verify that the elevator moves to selected floors, stops accurately, and doors open and close correctly at each floor.
- Negative Scenarios: Test for invalid inputs (e.g., selecting non-existent floors), check responses to emergency stops, and simulate overload conditions to ensure the system displays an overload warning and doesn’t operate.
5. If you were to test a file upload feature, what test cases would you write?
Answer: For a file upload feature, I would consider test cases such as:
- Testing file uploads within allowed size and format limits.
- Uploading files that exceed size limits and unsupported formats to verify error handling.
- Testing the response to empty file uploads.
- Verifying upload progress, especially for larger files.
- Testing functionality for drag-and-drop, if available.
6. Describe test cases you would consider for a password reset functionality.
Answer: For password reset functionality, I’d consider:
- Requesting password reset with a registered email and verifying OTP or reset link delivery.
- Testing the OTP expiration after a set time, ensuring a new OTP can be requested.
- Ensuring appropriate error messages for unregistered emails and incorrect OTP entries.
- Verifying password reset requirements (e.g., minimum length, complexity).
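The OTP-expiration check above can be sketched as a simple time-window comparison. The five-minute validity window and the function name are assumptions for illustration:

```python
# Sketch: checking OTP expiry for a password reset.
# The 5-minute validity window is an assumption for illustration.

OTP_TTL_SECONDS = 300

def otp_is_valid(issued_at: float, now: float) -> bool:
    """Is the OTP still inside its validity window?"""
    return (now - issued_at) <= OTP_TTL_SECONDS

issued = 1_000_000.0
assert otp_is_valid(issued, issued + 60)       # still fresh after 1 minute
assert otp_is_valid(issued, issued + 300)      # valid exactly at the limit
assert not otp_is_valid(issued, issued + 301)  # expired after 5 minutes
```

Note the boundary at exactly 300 seconds: whether the limit itself is valid is precisely the kind of detail a tester should confirm against the requirements.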
7. How would you approach testing a social media app’s notification feature?
Answer: For a social media app's notification feature, I’d test:
- Notification delivery for different events (likes, comments, follows).
- Accuracy and relevance of notification content (e.g., correct username and post).
- Notification handling settings (e.g., disabling specific notifications).
- Testing notification badge updates and in-app displays.
8. What test scenarios would you write for testing a search engine?
Answer: For a search engine, I’d consider:
- Testing keyword searches to ensure relevant results are returned.
- Handling misspellings, suggesting alternatives, and verifying response time.
- Testing filter and sorting options in results.
- Checking search functionality with special characters and long queries.
9. Write test cases for a “Forgot Password” feature with OTP-based verification.
Answer: Test cases for a “Forgot Password” feature include:
- Requesting OTP and ensuring it is received in email/SMS for registered users.
- Entering incorrect or expired OTP and verifying error handling.
- Testing password reset fields with complexity and confirmation checks.
- Ensuring successful reset redirects the user to the login screen with a confirmation message.
10. How do you ensure your test cases are clear, complete, and maintainable?
Answer: I follow a few key principles:
- Writing test cases in clear, concise language with step-by-step instructions.
- Including only necessary details to avoid excessive maintenance while covering essential information like preconditions and expected results.
- Regularly reviewing and updating test cases as the application changes, ensuring relevance and accuracy.
Defect Tracking & Reporting Questions
1. What are the common stages in the defect life cycle?
In the defect life cycle, a defect passes through several stages that ensure it is identified, tracked, and resolved effectively. Typically, the stages include:
- New: When a defect is first identified and logged in the system.
- Assigned: The defect is reviewed and assigned to a developer or relevant team member for investigation.
- Open: The developer starts analyzing and working on fixing the defect.
- Fixed: The developer has applied a fix, and it is ready for testing.
- Retest: QA re-tests the defect to verify the fix works as expected.
- Verified: If the defect is fixed and no longer reproducible, it is marked as verified.
- Closed: The defect is resolved and closed if it passes all testing.
- Reopen: If the defect reappears during testing, it may be reopened for further analysis.
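The life cycle above is effectively a state machine, which can be sketched as a transition table. The allowed transitions here are an assumption inferred from the stages listed; real tools like Jira let teams customize them:

```python
# Sketch: the defect life cycle modeled as a simple state machine.
# The allowed transitions are an assumption based on the stages above.

ALLOWED_TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Closed": {"Reopen"},
    "Reopen": {"Assigned"},
}

def can_transition(current: str, target: str) -> bool:
    return target in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("New", "Assigned")
assert can_transition("Retest", "Reopen")      # fix failed verification
assert not can_transition("New", "Closed")     # stages cannot be skipped
```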
2. How do you write an effective defect report? What key details do you include?
An effective defect report should provide clear and concise information to help developers understand and resolve the issue. Key details include:
- Title: A brief, descriptive title that summarizes the issue.
- Description: A detailed description of the defect, including what is expected versus the actual result.
- Steps to Reproduce: Clear and numbered steps that guide the developer to reproduce the defect.
- Severity & Priority: Severity reflects the impact on the application, while priority indicates urgency for fixing.
- Environment Details: Information about the environment (e.g., browser, OS, version) where the defect was found.
- Screenshots or Videos: Visual evidence that helps developers understand the issue faster.
3. Explain the severity and priority of a defect with examples.
Severity indicates the defect's impact on the application, while priority indicates the urgency of fixing it.
- High Severity, Low Priority: A rare crash in an advanced feature might be severe but low priority if it’s seldom used.
- High Severity, High Priority: A crash on the login screen prevents any user from accessing the application, making it critical.
- Low Severity, High Priority: A typo in the homepage header could be a minor issue but should be fixed immediately for user experience.
4. What steps do you take if a developer disagrees with the validity of a defect you reported?
If a developer disagrees with a defect, I ensure clear communication and recheck my findings. Steps include:
- Gather additional evidence, such as logs or screenshots, to support the defect report.
- Arrange a meeting to discuss the defect and demonstrate it in the environment where it was found.
- Involve a team lead or a third party if necessary for an objective assessment.
5. If you find a critical defect in the final testing phase, how would you handle it?
In this scenario, I would immediately communicate with relevant stakeholders. Actions include:
- Assessing the defect’s impact and determining if a workaround is possible.
- Escalating to project managers and discussing potential release impacts.
- Collaborating with the developer to find a quick resolution or schedule a fix in a future release if appropriate.
6. What is the role of defect triage meetings, and what is typically discussed?
Defect triage meetings prioritize defects based on severity, impact, and urgency. During these meetings:
- Defects are reviewed and prioritized for resolution based on their importance.
- Stakeholders discuss potential impacts and decide which defects to address immediately.
- Resource allocation and timelines for defect resolution are also decided.
7. How would you handle a situation where a defect is marked as “cannot be reproduced”?
For a defect that cannot be reproduced, I would:
- Revalidate the defect on different configurations or environments.
- Gather additional logs, screenshots, and detailed steps for better context.
- If still irreproducible, document all findings and monitor for any future occurrences.
8. Describe a time when you found a defect that was challenging to reproduce. How did you handle it?
In one project, I encountered an intermittent issue where the application crashed under specific conditions. I:
- Repeated testing in various scenarios to identify a pattern.
- Utilized logging tools to capture system states when the issue occurred.
- Collaborated with the developer to simulate different load conditions, ultimately finding the root cause in a rare server configuration issue.
9. What is root cause analysis in defect management, and why is it important?
Root cause analysis (RCA) identifies the underlying cause of defects to prevent recurrence. RCA is critical as it:
- Helps in understanding patterns and preventing similar issues in the future.
- Improves development processes by identifying areas for improvement.
- Increases the overall quality and reliability of the product by addressing core issues.
10. Explain how you track and manage defects using any bug tracking tool (e.g., Jira, Bugzilla).
I have experience using Jira to manage and track defects. My approach involves:
- Creating detailed defect reports with all necessary information and categorizing defects based on severity and priority.
- Regularly updating the status of defects as they progress through different stages.
- Attending defect triage meetings, adding relevant comments or additional information, and ensuring prompt follow-up until closure.
Quality Assurance Process & Best Practices
1. Describe the key principles of manual testing and why they are important.
Manual testing principles ensure thorough, effective testing. Key principles include:
- Understanding Requirements: Knowing requirements helps testers validate that the application meets user needs.
- Test Planning: Planning defines objectives, resources, and timelines, ensuring a structured approach to testing.
- Test Case Design: Well-designed test cases cover various scenarios and improve testing accuracy and coverage.
- Defect Detection: Identifying and documenting defects ensures developers have the information needed to resolve issues.
- Continuous Improvement: Learning from past testing cycles enhances future testing quality and efficiency.
2. What is the purpose of regression testing, and how do you determine which test cases to include in regression?
Regression testing ensures new changes don’t negatively impact existing functionalities. To select regression test cases, I consider:
- Core Functionality: Critical features that must work for the application to be usable.
- High-Risk Areas: Areas most likely to be affected by recent changes.
- Frequently Used Features: Popular features that users access regularly.
- Previously Defective Areas: Areas where bugs were previously found and fixed, as they may be prone to issues again.
3. How do you perform smoke testing, and when is it used in the testing process?
Smoke testing is a quick check to verify that the critical features of an application are working after a new build. I typically:
- Identify core functionalities that must work, like login, navigation, and basic operations.
- Execute test cases for these key areas to confirm that the build is stable.
Smoke testing is generally used at the beginning of each testing phase, saving time by identifying major issues early.
4. What is the “pesticide paradox” in testing, and how do you avoid it?
The "pesticide paradox" occurs when repeated use of the same tests fails to find new defects. To avoid this, I:
- Regularly update test cases to cover new scenarios and edge cases.
- Incorporate exploratory testing to find issues outside predefined test cases.
- Analyze defect trends to modify test cases based on emerging patterns.
5. Describe the role of a manual tester in an agile environment.
In an agile environment, a manual tester collaborates with developers and stakeholders to deliver quality in short, iterative cycles. Key responsibilities include:
- Participating in sprint planning and reviewing user stories to understand requirements.
- Designing and executing test cases within each sprint cycle to ensure timely feedback.
- Conducting exploratory testing to find defects early and communicate issues to the team for quick fixes.
- Actively contributing to retrospectives to improve processes and testing practices over time.
6. Explain the concept of “shift-left” testing and its importance in QA.
"Shift-left" testing involves starting testing earlier in the development lifecycle, typically during the requirements and design phases. It’s important because:
- Early detection of defects reduces the cost and time required for fixes.
- Involvement in the requirements phase helps prevent misunderstandings and ensures test cases align with requirements.
- Shift-left testing allows QA to provide continuous feedback, fostering a higher-quality end product.
7. How do you ensure that your testing process aligns with user requirements?
To align testing with user requirements, I focus on:
- Reviewing and validating requirements to create accurate test cases that meet user needs.
- Incorporating user personas and scenarios to test from the user’s perspective.
- Regularly communicating with stakeholders to ensure any changes in requirements are reflected in the tests.
- Using traceability matrices to ensure test cases cover all specified requirements.
8. What is risk-based testing, and how do you prioritize test cases using this approach?
Risk-based testing prioritizes test cases based on the likelihood and impact of failures. I prioritize by:
- Identifying high-risk areas where failures would have significant consequences.
- Focusing on critical functionality first, such as payment processing or data security.
- Collaborating with stakeholders to assess which areas are most important to users and business objectives.
9. In what situations would you conduct ad hoc testing, and how is it valuable?
Ad hoc testing is conducted when structured testing might miss unusual issues. It’s valuable because:
- It allows testers to explore the application intuitively, often uncovering hidden or edge-case defects.
- It’s useful in situations where time is limited, and a quick quality check is needed.
- Ad hoc testing complements structured testing by covering scenarios that predefined test cases might not include.
10. How do you ensure test coverage, and what metrics do you use to measure it?
Ensuring test coverage involves creating test cases that comprehensively validate requirements. I use the following metrics:
- Requirements Coverage: Verifies that all requirements have corresponding test cases.
- Code Coverage: Measures the extent to which code is tested, using tools to identify untested areas.
- Defect Density: Tracks the number of defects per module to identify high-risk areas.
- Test Execution Rate: Shows the percentage of tests executed, helping ensure all planned cases are covered.
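Two of the metrics above, requirements coverage and defect density, can be sketched from hypothetical test data; all of the requirement IDs, module names, and figures below are assumptions for illustration:

```python
# Sketch: computing coverage metrics from hypothetical test data.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
covered = {"REQ-1", "REQ-2", "REQ-4"}   # requirements that have test cases

# Requirements coverage: fraction of requirements with at least one test
requirements_coverage = len(covered & requirements) / len(requirements)
assert requirements_coverage == 0.75    # 3 of 4 requirements covered

defects_per_module = {"checkout": 6, "search": 2}
kloc_per_module = {"checkout": 3.0, "search": 4.0}  # thousand lines of code

# Defect density = defects / KLOC, flagging high-risk modules
density = {m: defects_per_module[m] / kloc_per_module[m]
           for m in defects_per_module}
assert density["checkout"] == 2.0   # highest density: a high-risk area
assert density["search"] == 0.5
```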