# AI Reports

ML-powered analytics including flaky tests, risk heatmaps, and predictive failure.

## AI-Powered Reports
Tier 3 AI reports use machine learning algorithms to analyze your testing data and surface insights that go beyond what traditional metrics can reveal. These 11 reports are available in the Reports tab under the AI Insights category.
| Report | Chart Type | What It Does |
|---|---|---|
| Flaky Tests | Table | Identifies tests with inconsistent pass/fail outcomes across multiple runs |
| Release Readiness | Gauge | Computes a readiness score from pass rate, progress, and critical test pass rate |
| Risk Heatmap (Folder) | Heatmap | Visualizes failure risk across test case folders based on historical data |
| Risk Heatmap (Feature) | Heatmap | Visualizes failure risk across product features based on defect patterns |
| Test Case Quality | Table | Scores each test case on structure, clarity, and coverage quality |
| Predictive Failure | Table | Predicts which tests are most likely to fail based on historical patterns |
| Tester Effectiveness | Table | Measures tester performance: pass rate, defect discovery, throughput |
| Stale Tests | Table | Finds test cases that haven't been executed in a configurable timeframe |
| Cycle Health | Table | Dashboard showing progress, quality, and blockers for each test cycle |
| Suite Optimization | Table | Identifies redundant, unused, and always-passing tests to streamline your suite |
| Execution Velocity | Line | Tracks daily execution throughput with rolling average trend line |
## AI Report Details

### Flaky Tests
Flaky tests are tests that produce different results (pass/fail) across runs without any code changes. This report analyzes execution history across multiple cycles to identify tests with inconsistent outcomes. It shows a flakiness score (0-100) for each test, with higher scores indicating more inconsistency.
Use this report to: Prioritize stabilization efforts, quarantine unreliable tests, and improve CI/CD pipeline reliability.
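The product's scoring algorithm isn't documented here, but a minimal sketch of one plausible approach is to count how often a test flips between pass and fail across consecutive runs. The `flakiness_score` function and its flip-ratio formula below are illustrative assumptions, not the actual implementation.

```python
def flakiness_score(outcomes):
    """Score 0-100 for pass/fail inconsistency across runs.

    Counts transitions (pass->fail or fail->pass) between consecutive
    runs and normalizes by the maximum possible number of transitions.
    0 = perfectly stable, 100 = alternates on every run.
    NOTE: illustrative heuristic, not the product's actual model.
    """
    if len(outcomes) < 2:
        return 0
    flips = sum(1 for a, b in zip(outcomes, outcomes[1:]) if a != b)
    return round(100 * flips / (len(outcomes) - 1))

print(flakiness_score(["pass", "fail", "pass", "fail"]))  # 100 (maximally flaky)
print(flakiness_score(["pass"] * 5))                      # 0 (stable)
```

A flip-based score catches alternating tests that a plain failure rate would miss: a test failing 50% of the time in one contiguous block is likely broken, not flaky.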
### Release Readiness
A composite score (gauge chart) that combines multiple factors to answer "Are we ready to release?" Factors include: overall pass rate, test execution completion percentage, critical test pass rate, and open blocker/critical defect count.
Use this report to: Make data-driven release decisions and communicate readiness to stakeholders.
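As a sketch of how such a composite could be computed from the four listed factors: the weights and the blocker-penalty curve below are hypothetical assumptions chosen for illustration, not the product's actual formula.

```python
def readiness_score(pass_rate, completion, critical_pass_rate, open_blockers,
                    weights=(0.35, 0.25, 0.30, 0.10)):
    """Weighted composite readiness score in 0-100.

    pass_rate, completion, critical_pass_rate are percentages (0-100).
    Open blocker/critical defects enter via a penalty factor that
    shrinks with each additional open blocker.
    NOTE: weights and penalty shape are illustrative assumptions.
    """
    blocker_factor = 100.0 / (1 + open_blockers)
    parts = (pass_rate, completion, critical_pass_rate, blocker_factor)
    return round(sum(w * p for w, p in zip(weights, parts)), 1)

# All tests run, strong pass rates, no blockers -> high readiness.
print(readiness_score(pass_rate=92, completion=100,
                      critical_pass_rate=98, open_blockers=0))
```

Putting the blocker count behind a decaying factor (rather than a linear subtraction) makes the first open blocker hurt the most, which matches how release decisions are usually made.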
### Risk Heatmaps
Two variants (by Folder and by Feature) visualize which areas of your application carry the highest failure risk. The heatmap uses color intensity (green to red) to show relative risk levels based on failure rates, defect counts, and historical trends.
Use this report to: Focus testing effort on high-risk areas and allocate QA resources effectively.
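A minimal sketch of how a per-cell risk value might be derived from the three listed inputs and mapped to the green-to-red scale. The blend weights, the `max_defects` normalizer, and the bucket thresholds are assumptions for illustration only.

```python
def risk_level(failure_rate, defect_count, trend, max_defects=20):
    """Blend inputs into a 0-1 risk value for one folder/feature cell.

    failure_rate: fraction of executions that failed (0-1)
    defect_count: open defects linked to this area
    trend: -1 (improving) .. +1 (worsening) historical direction
    NOTE: weights are illustrative, not the product's actual model.
    """
    defect_term = min(defect_count / max_defects, 1.0)
    trend_term = (trend + 1) / 2  # rescale to 0-1
    return 0.5 * failure_rate + 0.3 * defect_term + 0.2 * trend_term

def color_bucket(risk):
    """Map a 0-1 risk value to the heatmap's color scale."""
    if risk < 0.33:
        return "green"
    if risk < 0.66:
        return "yellow"
    return "red"

for area, stats in {"checkout": (0.40, 12, +0.5),
                    "profile": (0.05, 1, -0.2)}.items():
    r = risk_level(*stats)
    print(f"{area}: risk={r:.2f} -> {color_bucket(r)}")
```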
### Test Case Quality
Evaluates each test case against quality criteria including: presence of preconditions, number of test steps, clarity of expected results, proper use of fields (priority, type, labels), and linking to requirements. Each test case receives a quality score (0-100).
Use this report to: Identify poorly written test cases, drive test case improvement initiatives, and train new team members.
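The criteria above can be sketched as a checklist scorer. The point weights, field names, and the 2-15 step range below are hypothetical; the report's real scoring model is not specified here.

```python
def quality_score(case):
    """Score a test case 0-100 against the listed quality criteria.

    Field names ("preconditions", "steps", "priority", ...) and point
    weights are illustrative assumptions, not the product's schema.
    """
    score = 0
    if case.get("preconditions"):
        score += 20  # preconditions present
    steps = case.get("steps", [])
    if 2 <= len(steps) <= 15:  # neither trivial nor unwieldy
        score += 25
    if steps and all(s.get("expected") for s in steps):
        score += 25  # every step has a clear expected result
    if case.get("priority") and case.get("type") and case.get("labels"):
        score += 15  # proper use of fields
    if case.get("requirement_ids"):
        score += 15  # linked to requirements
    return score

case = {
    "preconditions": "User is logged in",
    "steps": [{"action": "Open cart", "expected": "Cart page loads"},
              {"action": "Click checkout", "expected": "Payment form shown"}],
    "priority": "High", "type": "Functional", "labels": ["cart"],
    "requirement_ids": ["REQ-42"],
}
print(quality_score(case))  # 100
```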
### Predictive Failure
Uses historical execution data to predict which tests are most likely to fail in the next cycle. The model considers factors like: recent failure frequency, code area change rate, defect proximity, and historical flakiness.
Use this report to: Prioritize which tests to run first in time-constrained cycles and identify areas needing attention before they fail.
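One common way to combine factors like these is a logistic model. The coefficients below are made up for illustration; the actual trained model behind this report is not documented here.

```python
import math

def failure_probability(recent_failure_rate, change_rate,
                        defect_proximity, flakiness):
    """Estimate P(fail next cycle) from four 0-1 factors.

    Logistic combination with illustrative coefficients -- NOT the
    product's trained model, just a sketch of the technique.
    """
    z = (2.0 * recent_failure_rate + 1.5 * change_rate
         + 1.0 * defect_proximity + 0.8 * flakiness - 2.0)
    return 1 / (1 + math.exp(-z))

# A test in a heavily changed, defect-dense area ranks far above a
# stable test in untouched code.
risky = failure_probability(0.6, 0.9, 0.8, 0.4)
stable = failure_probability(0.0, 0.1, 0.0, 0.0)
print(f"{risky:.2f} vs {stable:.2f}")
```

Sorting tests by this probability, descending, gives a fail-fast execution order for time-constrained cycles.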
### Suite Optimization
Analyzes your entire test suite to find optimization opportunities: tests that have never been executed, tests that always pass (potentially low value), duplicate or near-duplicate tests, and tests with overlapping coverage.
Use this report to: Reduce test suite maintenance burden, speed up execution time, and remove redundant tests.
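A minimal sketch of partitioning a suite into the categories above. The data shape and names are hypothetical, and the duplicate check here is exact-match on normalized steps; a real implementation would likely use fuzzy or semantic similarity for near-duplicates.

```python
def optimization_candidates(tests):
    """Partition a suite into optimization categories.

    Each test is a dict with "name", "runs" (list of outcomes), and
    "steps" (list of strings) -- an illustrative schema, not the
    product's actual data model.
    """
    never_run, always_pass, duplicates = [], [], []
    seen_steps = {}
    for t in tests:
        runs = t.get("runs", [])
        if not runs:
            never_run.append(t["name"])          # never executed
        elif all(r == "pass" for r in runs):
            always_pass.append(t["name"])        # potentially low value
        # Exact match on normalized steps stands in for real
        # near-duplicate detection.
        key = tuple(s.strip().lower() for s in t.get("steps", []))
        if key in seen_steps:
            duplicates.append((seen_steps[key], t["name"]))
        else:
            seen_steps[key] = t["name"]
    return {"never_run": never_run, "always_pass": always_pass,
            "duplicates": duplicates}

suite = [
    {"name": "TC-1", "runs": ["pass"] * 6, "steps": ["open app", "log in"]},
    {"name": "TC-2", "runs": [], "steps": ["open settings"]},
    {"name": "TC-3", "runs": ["pass", "fail"], "steps": ["Open App", "Log in"]},
]
print(optimization_candidates(suite))
```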
AI reports are most accurate when you have at least 3-5 completed test cycles of historical data. The more execution history available, the better the AI models can identify patterns and make predictions.