Extent Report - Notes By ShariqSP

Extent Report

Reporting is a critical aspect of the software testing lifecycle. It provides a clear and structured summary of the testing outcomes, helping teams understand the current quality of the application. Effective reports allow developers, testers, project managers, and other stakeholders to quickly assess the status of the project and the success of the test cases.

Reports in testing can be broadly categorized into two types: High-Level Reports and Low-Level Reports. Each serves a specific purpose depending on the audience and the level of detail required.

1. High-Level Report

A High-Level Report provides an overview of the testing process, focusing on the overall status of the test cases and their outcomes. It is typically used by project managers, business stakeholders, and other non-technical personnel who need a quick summary of the project’s health without diving into the technical details. These reports are concise, focusing on the most critical metrics, such as the number of test cases executed, passed, failed, or skipped.

When to Use a High-Level Report:

  • When reporting to stakeholders or business executives who are not directly involved in the technical aspects of testing.
  • When presenting a general status update during project meetings.
  • When summarizing the progress of testing across multiple testing phases (e.g., functional testing, regression testing).

Key Elements of a High-Level Report:

  • Total number of test cases executed.
  • Number of test cases passed, failed, or skipped.
  • Percentage of success and failure.
  • High-level risk and defect summary.
  • Test coverage (e.g., modules or features covered).

Real-Time Scenario:

For a retail e-commerce website, a high-level report would summarize the following:

  • Total Test Cases: 200
  • Test Cases Passed: 180
  • Test Cases Failed: 15
  • Test Cases Skipped: 5
  • Pass Rate: 90%
  • Critical Defects: 2 (Checkout Failure, Payment Gateway Issue)
  • Test Coverage: 95% of all high-priority features.

This report would be shared with business stakeholders to give them an overview of how well the core functionalities of the website, like login, product search, and checkout, are performing without delving into the technical reasons for failures.
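The headline numbers in such a summary are simple arithmetic over the raw pass/fail/skip counts. A minimal sketch of that derivation (the class and method names here are illustrative, not part of any reporting library):

```java
// Sketch: deriving high-level report metrics from raw counts.
// All names are illustrative; they do not belong to any reporting tool.
public class TestSummary {

    public final int passed, failed, skipped;

    public TestSummary(int passed, int failed, int skipped) {
        this.passed = passed;
        this.failed = failed;
        this.skipped = skipped;
    }

    // Total test cases executed in the run
    public int total() {
        return passed + failed + skipped;
    }

    // Pass rate as a percentage of all executed test cases
    public double passRate() {
        return 100.0 * passed / total();
    }

    public static void main(String[] args) {
        // Numbers from the e-commerce scenario above: 180 passed, 15 failed, 5 skipped
        TestSummary summary = new TestSummary(180, 15, 5);
        System.out.println("Total: " + summary.total());              // 200
        System.out.println("Pass Rate: " + summary.passRate() + "%"); // 90.0%
    }
}
```

With the scenario's counts this reproduces the 200 total and 90% pass rate quoted above.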

2. Low-Level Report

A Low-Level Report is a detailed, technical report that dives deep into each test case's execution. It is primarily used by developers, testers, and technical leads to understand the specific details of test failures, logs, and steps to reproduce issues. Low-level reports focus on the test execution details, error logs, screenshots, and exact failure points.

When to Use a Low-Level Report:

  • When reporting detailed technical information to developers or other technical stakeholders.
  • When analyzing test failures to debug or find root causes of issues.
  • When documenting test case execution for auditing or compliance purposes.

Key Elements of a Low-Level Report:

  • Detailed information on each test case execution (steps, input, output).
  • Error logs and stack traces for failed test cases.
  • Exception handling information (if applicable).
  • Test execution time and environment details (browser, OS, version).
  • Screenshots of failures or errors.
  • Links to the related defect or bug tracking system.
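To make these elements concrete, one failed execution could be modeled as a plain value object before it is rendered into a report. This is a hypothetical sketch; the field names are ours, not any tool's schema:

```java
// Hypothetical sketch of one low-level report entry; field names are
// illustrative and do not correspond to any particular tool's schema.
public class LowLevelEntry {

    public final String testCase;
    public final String error;
    public final String environment;
    public final long executionSeconds;
    public final String bugId;

    public LowLevelEntry(String testCase, String error, String environment,
                         long executionSeconds, String bugId) {
        this.testCase = testCase;
        this.error = error;
        this.environment = environment;
        this.executionSeconds = executionSeconds;
        this.bugId = bugId;
    }

    // One line of the report body, linking the failure to its tracked bug
    public String toReportLine() {
        return testCase + " | " + error + " | " + environment
             + " | " + executionSeconds + "s | Bug " + bugId;
    }

    public static void main(String[] args) {
        LowLevelEntry entry = new LowLevelEntry(
            "Checkout Process - Payment Failure",
            "PaymentGatewayTimeoutException",
            "Chrome 89.0, Windows 10",
            45, "#1013");
        System.out.println(entry.toReportLine());
    }
}
```

Keeping the entry as structured data first (rather than free text) is what lets tools like Extent or Allure later render it as tables, filters, and links to the bug tracker.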

Real-Time Scenario:

In the same retail e-commerce website example, a low-level report would provide detailed information about the failed test cases. For instance:

  • Test Case: Checkout Process - Payment Failure
  • Step 1: User adds an item to the cart.
  • Step 2: User clicks on checkout.
  • Step 3: User selects credit card payment method and clicks "Pay Now."
  • Error: PaymentGatewayTimeoutException
  • Screenshot: [link to screenshot showing the error]
  • Execution Time: 45 seconds
  • Environment: Chrome 89.0, Windows 10
  • Log File: [link to log file]
  • Bug ID: #1013 - Payment Gateway Timeout

This report would be sent to the development team to analyze the exact cause of the failure (e.g., a payment gateway timeout issue) and to take action to resolve it.

High-Level vs. Low-Level Report: When to Use Which

Both high-level and low-level reports are necessary for different stages of the testing process. High-level reports are ideal for giving stakeholders a quick overview of the testing progress, especially during sprint reviews or project updates. Low-level reports, on the other hand, are more appropriate for debugging, issue tracking, and detailed analysis by the testing and development teams.

  • Use High-Level Reports during status meetings, sprint reviews, and when reporting to business or non-technical stakeholders.
  • Use Low-Level Reports for technical discussions, root cause analysis, bug fixes, and when collaborating with the development team.

How to Write a Good Report

Whether you're writing a high-level or low-level report, the key is clarity, accuracy, and completeness. A good report should:

  • Be clear and concise: Avoid unnecessary jargon and focus on delivering actionable insights.
  • Include all relevant information: Ensure that the report contains everything required for the audience, whether technical (logs, stack traces) or non-technical (metrics, summary).
  • Be organized: Structure the report logically with headings, subheadings, and bullet points to make it easy to read and understand.
  • Be visual: Include graphs, charts, or screenshots where necessary to make the report more engaging and easier to interpret.

Example of a High-Level Report:

Title: E-commerce Website Test Summary

  • Total Test Cases: 200
  • Pass: 180
  • Fail: 15
  • Skipped: 5
  • Critical Defects: 2
  • Test Coverage: 95%
  • Pass Rate: 90%
  • Next Steps: Resolve critical defects by the next sprint.

Example of a Low-Level Report:

Title: Checkout Process - Payment Failure
Test Case ID: TC-1234
Environment: Chrome 89.0, Windows 10
Steps:

  1. User adds an item to the cart.
  2. User clicks on checkout.
  3. User selects credit card payment method and clicks "Pay Now."

Error: PaymentGatewayTimeoutException
Screenshot: [link]
Bug ID: #1013 - Payment Gateway Timeout
Log: [link to log file]
Execution Time: 45 seconds

Extent Reports in Testing

Extent Reports is a powerful and flexible reporting library used in automated testing frameworks, particularly with Selenium. It generates interactive, detailed, and visually appealing reports that help track the status of the tests executed. Extent Reports are popular because of their ease of integration, customizability, and ability to include rich media such as screenshots and videos within the reports.

Key Features of Extent Reports

  • Supports various formats such as HTML, PDF, and email reports.
  • Can capture logs, errors, and exceptions, along with screenshots and videos.
  • Provides visual insights using graphs, charts, and categorized test results.
  • Allows adding custom information, such as test case descriptions, tags, and execution times.
  • Integration with Selenium WebDriver for real-time reporting.

Why Extent Reports is Important

Extent Reports helps in visualizing the test execution in an interactive way, making it easy to identify failures and their causes. It offers a high level of customizability and supports media-rich reports, which helps in improving the overall reporting experience.

When to Use Extent Reports:

  • When detailed, visually rich reports are required for stakeholders.
  • When you need to include screenshots of failures or key actions during test execution.
  • When running complex test scenarios where understanding the results at a granular level is important.

Real-Time Scenario:

In a scenario where you are testing a banking application, Extent Reports can be used to generate reports after running end-to-end test cases, such as user login, fund transfers, and bill payments. The report can include:

  • Test Case Name: "Fund Transfer Test"
  • Status: Passed
  • Execution Time: 2 minutes 15 seconds
  • Screenshot: [Image of the successful fund transfer confirmation]
  • Logs: "Fund transfer executed successfully, response time: 1.2 seconds"

These insights provide the stakeholders with not only the outcome but also key performance metrics and visual proof of successful or failed executions.

How to Write an Extent Report

Integrating Extent Reports in your test automation framework typically involves the following steps:

  1. Initialize the ExtentReports object and create a report file in your framework's setup method.
  2. Create test cases (tests) and log the status of each test step (pass, fail, skip, etc.).
  3. Attach screenshots for failed steps or key actions during the test.
  4. Flush the report at the end of execution to generate the final output.

Extent Report in Selenium Testing

Extent Reports are advanced reporting tools used to generate interactive and visually appealing reports for Selenium automation test results. This section explains how to integrate and use Extent Reports in Selenium with a detailed example.

Required Maven Dependencies

Add the following dependencies in your pom.xml if using Maven:

<dependencies>
    <!-- Selenium Java -->
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>4.12.1</version>
    </dependency>

    <!-- Extent Reports -->
    <dependency>
        <groupId>com.aventstack</groupId>
        <artifactId>extentreports</artifactId>
        <version>5.0.9</version>
    </dependency>
</dependencies>

Complete Java Code for Selenium with Extent Reports

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import com.aventstack.extentreports.reporter.configuration.Theme;

public class ExtentReportExample {

    ExtentReports extent;
    ExtentSparkReporter sparkReporter;
    ExtentTest test;
    WebDriver driver;

    @BeforeClass
    public void setup() {
        // In ExtentReports 5.x, ExtentSparkReporter replaces the removed ExtentHtmlReporter
        sparkReporter = new ExtentSparkReporter("test-output/ExtentReport.html");
        sparkReporter.config().setTheme(Theme.STANDARD);
        sparkReporter.config().setDocumentTitle("Selenium Test Report");
        sparkReporter.config().setReportName("Extent Report Demo");

        extent = new ExtentReports();
        extent.attachReporter(sparkReporter);
        extent.setSystemInfo("Tester", "Your Name");
        extent.setSystemInfo("Browser", "Chrome");

        driver = new ChromeDriver();
    }

    @Test
    public void testGoogleTitle() {
        test = extent.createTest("Google Title Test", "Validate the title of Google homepage");
        driver.get("https://www.google.com");
        test.log(Status.INFO, "Navigated to Google");

        String title = driver.getTitle();
        test.log(Status.INFO, "Title fetched: " + title);

        try {
            Assert.assertEquals(title, "Google");
            test.pass("Title validation passed");
        } catch (AssertionError e) {
            test.fail("Title validation failed: " + e.getMessage());
        }
    }

    @AfterClass
    public void tearDown() {
        // Log before quitting so the message is recorded while the session is still valid
        test.log(Status.INFO, "Closing browser");
        driver.quit();
        extent.flush();
    }
}

Explanation of the Code

  • Maven Dependencies: Selenium and ExtentReports dependencies are required.
  • Extent Report Configuration: HTML reporter is set up with themes and document metadata.
  • System Info: Metadata like tester name and browser are added to the report.
  • Test Execution: A test is created to validate the Google homepage title.
  • Logs and Validation: Logs are generated during the test, and results are added to the report.
  • Flushing the Report: The report is written to the file after the test completes.

Folder Structure of Generated Report

The generated report will have:

  • ExtentReport.html: The HTML report with all test details.
  • Screenshots (Optional): Capture screenshots for failed tests and attach them to the report.

How to Run the Program

  1. Download and install the Chrome browser and ChromeDriver.
  2. Add the ChromeDriver path to your system's PATH variable.
  3. Run the program in your IDE.
  4. Check the test-output/ExtentReport.html file for the report.

Why Use Extent Reports?

  • Interactive HTML Reports with logs and screenshots.
  • Detailed test execution logging.
  • Customizable reports with themes and metadata.
  • Cross-platform compatibility.

Attaching Screenshots in Extent Report with Selenium

Screenshots are crucial for identifying issues during test execution. Attaching screenshots to Extent Reports helps visualize errors and improve test analysis. This section covers a real-time scenario where a screenshot is captured and added to the report if a test fails.

Required Maven Dependencies

Ensure you have the following Maven dependencies in your pom.xml:

<dependencies>
    <!-- Selenium Java -->
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>4.12.1</version>
    </dependency>

    <!-- Extent Reports -->
    <dependency>
        <groupId>com.aventstack</groupId>
        <artifactId>extentreports</artifactId>
        <version>5.0.9</version>
    </dependency>

    <!-- Commons IO (provides the FileUtils class used to save screenshots) -->
    <dependency>
        <groupId>commons-io</groupId>
        <artifactId>commons-io</artifactId>
        <version>2.11.0</version>
    </dependency>
</dependencies>

Java Code: Attaching Screenshot to Extent Report

This code demonstrates capturing a screenshot when a test fails and attaching it to the Extent Report.

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.apache.commons.io.FileUtils;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.Status;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ScreenshotInExtentReport {

    WebDriver driver;
    ExtentReports extent;
    ExtentTest test;

    @BeforeClass
    public void setup() {
        // In ExtentReports 5.x, ExtentSparkReporter replaces the removed ExtentHtmlReporter
        ExtentSparkReporter sparkReporter = new ExtentSparkReporter("test-output/ExtentScreenshotReport.html");
        extent = new ExtentReports();
        extent.attachReporter(sparkReporter);

        driver = new ChromeDriver();
    }

    @Test
    public void testGoogleTitleWithScreenshot() {
        test = extent.createTest("Google Title Test with Screenshot",
                                 "This test captures a screenshot if the title validation fails.");

        driver.get("https://www.google.com");
        test.log(Status.INFO, "Navigated to Google homepage");

        String actualTitle = driver.getTitle();
        test.log(Status.INFO, "Fetched title: " + actualTitle);

        try {
            Assert.assertEquals(actualTitle, "Google123"); // Intentionally incorrect title
            test.pass("Title validation passed");
        } catch (AssertionError e) {
            String screenshotPath = captureScreenshot("GoogleTitleTest");
            test.fail("Title validation failed").addScreenCaptureFromPath(screenshotPath);
        }
    }

    public String captureScreenshot(String screenshotName) {
        // Format the timestamp for the screenshot name
        String timestamp = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
        String filePath = "screenshots/" + screenshotName + "_" + timestamp + ".png";

        // Capture the screenshot and save it (FileUtils creates the directory if needed)
        File srcFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        try {
            FileUtils.copyFile(srcFile, new File(filePath));
            System.out.println("Screenshot saved: " + filePath);
        } catch (IOException e) {
            System.out.println("Failed to save screenshot: " + e.getMessage());
        }
        return filePath;
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
        extent.flush();
    }
}

Explanation of the Code

  • Maven Dependencies: The Selenium and ExtentReports dependencies are required for browser automation and reporting.
  • Extent Report Configuration: An HTML reporter is created for test-output/ExtentScreenshotReport.html and attached to ExtentReports, so that is where the results will be logged.
  • Test Case with Screenshot:
    • The test navigates to the Google homepage and fetches its title.
    • If the title validation fails, the captureScreenshot() method captures a screenshot and saves it in the screenshots folder with a timestamp.
    • The screenshot is then attached to the report using addScreenCaptureFromPath().
  • Screenshot Capture Logic:
    • The TakesScreenshot interface is used to capture the screenshot.
    • The screenshot is saved in the screenshots/ directory with a timestamp to avoid overwriting.
  • Report Generation: The report is flushed using extent.flush() at the end of the test execution to write all logs and screenshots into the HTML report.
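The unique-filename idea from the capture logic can be isolated into a tiny stdlib-only helper. This sketch mirrors the pattern used above; the helper name is ours:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Stdlib-only sketch of the unique-filename pattern used in the capture logic,
// so each screenshot gets its own file instead of overwriting the last one.
public class ScreenshotNames {

    // Produces e.g. "screenshots/GoogleTitleTest_20240101123000.png"
    public static String uniquePath(String screenshotName, Date when) {
        String timestamp = new SimpleDateFormat("yyyyMMddHHmmss").format(when);
        return "screenshots/" + screenshotName + "_" + timestamp + ".png";
    }

    public static void main(String[] args) {
        System.out.println(uniquePath("GoogleTitleTest", new Date()));
    }
}
```

A one-second timestamp resolution is enough for a single sequential test run; parallel runs would want the thread or test ID in the name as well.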

Real-Time Scenario: When to Use Screenshots?

  • Test Failures: Capture screenshots to visualize what went wrong during the test execution.
  • UI Validation: Validate the appearance of UI elements across browsers and platforms.
  • Automation Debugging: Use screenshots to debug failed test steps during automation runs.
  • Continuous Integration: Attach screenshots to reports generated in Jenkins, GitLab CI/CD, or other CI tools for easier issue tracking.

Folder Structure for Screenshots and Report

  • test-output/ExtentScreenshotReport.html: The HTML report with screenshots and logs.
  • screenshots/: Directory containing screenshots with timestamps.

How to Run the Program

  1. Install the latest version of ChromeDriver and add it to your system PATH.
  2. Run the test using your IDE or mvn test if you are using Maven.
  3. Check the test-output/ExtentScreenshotReport.html report for the attached screenshots.

Allure Reports in Testing

Allure Reports is another popular and visually appealing reporting tool, widely used in test automation frameworks to produce detailed and user-friendly test execution reports. It integrates seamlessly with many testing frameworks such as TestNG, JUnit, and Cucumber, and provides a clean, structured report that includes logs, screenshots, test descriptions, and categories of test results.

Key Features of Allure Reports

  • Supports attaching rich media such as screenshots, logs, and videos to test results.
  • Allows categorization of test results, such as passed, failed, broken, and skipped.
  • Provides comprehensive overviews of test execution, including statistics and trends.
  • Supports behavior-driven development (BDD) tools like Cucumber, offering reports in Gherkin-style format.
  • Customizable dashboards to visualize key metrics.

Why Allure Reports is Important

Allure Reports provide structured, detailed, and intuitive results that help testers and developers quickly identify issues. The ability to categorize outcomes and present them in an organized, graphical format makes Allure ideal for test automation projects that require traceability and transparency.

When to Use Allure Reports:

  • When working with BDD frameworks like Cucumber to generate Gherkin-style reports.
  • When you need highly customizable, graphical reports with metrics and trends.
  • When running complex test suites that require detailed insights into the testing process.

Real-Time Scenario:

Suppose you're testing a healthcare management system with multiple modules such as patient registration, doctor scheduling, and billing. Allure Reports can help generate categorized test results for each module:

  • Patient Registration: Passed (5 test cases)
  • Doctor Scheduling: Failed (1 test case - Issue: Date selection error)
  • Billing Module: Passed (3 test cases)
  • Logs and screenshots attached for the failed test case.

These detailed insights help both developers and business stakeholders understand which modules passed or failed, with direct links to the logs and screenshots for failed scenarios.

How to Write an Allure Report

Allure Reports are typically generated by adding specific annotations and logging mechanisms in your test scripts. Below are the steps to integrate Allure Reports:

  1. Add Allure dependencies to your project (e.g., in Maven or Gradle).
  2. Annotate test cases with Allure annotations such as @Step, @Attachment, and @Feature.
  3. Generate and attach logs, screenshots, or other media to the test cases.
  4. After test execution, use Allure command-line tools to generate the report from the results.

Allure Report in Selenium Testing

Allure Reports are powerful tools that provide visually appealing and detailed test reports. They help testers and developers analyze and track test results effectively, integrating features like timelines, logs, and screenshots.

Required Maven Dependencies

Add the following dependencies in your pom.xml to integrate Selenium with Allure Reports:

<dependencies>
    <!-- Selenium Java -->
    <dependency>
        <groupId>org.seleniumhq.selenium</groupId>
        <artifactId>selenium-java</artifactId>
        <version>4.12.1</version>
    </dependency>

    <!-- Allure TestNG -->
    <dependency>
        <groupId>io.qameta.allure</groupId>
        <artifactId>allure-testng</artifactId>
        <version>2.20.1</version>
    </dependency>
</dependencies>

Complete Java Code for Selenium with Allure Reports

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import io.qameta.allure.Allure;
import io.qameta.allure.Description;
import io.qameta.allure.Step;
import io.qameta.allure.Attachment;

public class AllureReportExample {

    WebDriver driver;

    @BeforeClass
    public void setup() {
        driver = new ChromeDriver(); // Ensure ChromeDriver is set in PATH
        Allure.addAttachment("Browser", "Chrome");
    }

    @Test(description = "Validate Google Title")
    @Description("This test validates the title of the Google homepage.")
    public void testGoogleTitle() {
        openGoogle();
        String title = driver.getTitle();
        logInfo("Fetched title: " + title);

        try {
            Assert.assertEquals(title, "Google");
            logInfo("Title validation passed.");
        } catch (AssertionError e) {
            captureScreenshot();
            logInfo("Title validation failed: " + e.getMessage());
            throw e;
        }
    }

    @Step("Open Google Homepage")
    public void openGoogle() {
        driver.get("https://www.google.com");
    }

    @Attachment(value = "Screenshot", type = "image/png")
    public byte[] captureScreenshot() {
        // Capture the page as raw PNG bytes; Allure attaches the returned array
        return ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
    }

    @Step("Log Information: {0}")
    public void logInfo(String message) {
        Allure.addAttachment("Info", message);
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
        Allure.addAttachment("Browser closed", "Chrome");
    }
}

Explanation of the Code

  • Maven Dependencies: Selenium and Allure TestNG are required for this setup.
  • Allure Annotations: @Description and @Step annotations enhance the readability of the report.
  • Test Flow: A test is written to validate the Google homepage title.
  • Logging and Attachments: The test logs information and attaches screenshots if the test fails.
  • Teardown: The driver quits, and a final message is logged in the report.

How to Generate Allure Report

  1. Install Allure on your machine:
     brew install allure   # For macOS
     choco install allure  # For Windows
  2. Run your tests with mvn test.
  3. Generate the report by executing:
     allure serve target/allure-results

Why Use Allure Reports?

  • Allure provides graphical timelines and detailed insights for each test step.
  • Integrates easily with Selenium, TestNG, JUnit, and other frameworks.
  • Supports adding screenshots, logs, and attachments to enhance the report.
  • Generates real-time HTML reports viewable in any browser.

Folder Structure of Generated Report

The report folder will have the following structure:

  • allure-results: Directory where test results are saved.
  • allure-report: Directory containing the generated HTML report (after running the allure serve command).