AI at testRigor

At testRigor, we recognize the transformative potential of AI in software testing. As we integrate AI into our platform, our priority remains clear: ensure that AI systems and applications are used, developed, deployed, and managed in a manner that upholds transparency, fairness, accountability, and privacy standards.

This document provides a concise overview of how we address key concerns related to AI usage in test automation. It is designed to help customers quickly understand our approach, demonstrating our commitment to privacy, reliability, and transparency.

We leverage advanced AI models that align with industry best practices, ensuring that our technology is both cutting-edge and compliant with best-in-class security and privacy standards.

Each section below introduces a core aspect of AI usage within testRigor, followed by frequently asked questions (FAQs) to provide clarity on our practices.

Training & AI Model Usage

AI-driven test automation offers significant efficiency and accuracy improvements, but ensuring data security, reliability, and human oversight is critical. Below, we address some of the most common questions regarding how testRigor uses AI models while maintaining customer trust and privacy.

1. Does testRigor train its AI models on customer test data or any inputs we provide?

We never use data from our paying customers for training.

This commitment is aligned with the terms and privacy policies of the AI providers we work with, ensuring that customer data remains private. Instead of training on customer data, testRigor relies on its own models and on existing AI models from leading AI research and development companies or open-source alternatives.

2. How does testRigor ensure that AI-generated test scripts remain reliable and free of errors or hallucinations?

We implement multiple mechanisms to ensure the accuracy and reliability of AI-generated test scripts, including:

  • Continuous performance monitoring to track AI behavior and detect inconsistencies.
  • Human oversight and validation to review AI-generated outputs.
  • User feedback mechanisms to refine and improve AI performance.
  • Robust error-handling to detect and flag potential anomalies in test scripts.

By combining AI automation with rigorous quality control measures, we minimize errors and maintain test reliability.
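
To illustrate the kind of quality control described above, here is a minimal, generic sketch of flagging anomalous AI-generated steps for human review. It is not testRigor's implementation; the action vocabulary and plain-English step format are assumptions made for the example.

  # Illustrative sketch only: flag AI-generated test steps that fall outside
  # an allowed action vocabulary so a human can review them before execution.
  # The action names and step format are assumptions made for this example.
  ALLOWED_ACTIONS = {"open", "click", "enter", "check", "scroll", "wait"}

  def flag_anomalous_steps(steps: list[str]) -> list[str]:
      """Return the generated steps that do not start with a known action."""
      anomalies = []
      for step in steps:
          action = step.strip().split(" ", 1)[0].lower() if step.strip() else ""
          if action not in ALLOWED_ACTIONS:
              anomalies.append(step)
      return anomalies

  if __name__ == "__main__":
      generated = ['click "Sign in"', 'enter "user@example.com" into "Email"', 'teleport to checkout']
      for step in flag_anomalous_steps(generated):
          print(f"Needs human review: {step}")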

3. Can customers fine-tune or customize AI behavior for their specific testing needs without exposing proprietary data?

Yes, testRigor enables customers to customize AI behavior without risking proprietary data exposure. testRigor uses existing LLMs and fine-tunes them using public data, not customer data. Customers can customize inputs to guide and improve AI results for their specific testing requirements.

4. How does testRigor balance AI automation with human oversight to prevent false positives or flaky tests?

AI automation is powerful, but human validation remains essential. testRigor employs a hybrid approach that ensures accuracy by:

  • Leveraging self-healing capabilities that allow tests to adapt to minor UI changes.
  • Providing clear reporting with execution logs, videos, and screenshots (see the generic sketch after this list).
  • Enabling user feedback mechanisms to flag and correct inconsistencies.
  • Allowing manual intervention where necessary to refine AI-driven test cases.
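
As a generic illustration of the reporting point above, and not testRigor's code, the sketch below wraps a test step so that a failure automatically produces a screenshot and a log entry for human review; the function and file names are assumptions made for the example.

  # Generic illustration (not testRigor's implementation) of capturing
  # evidence when a test step fails so that a human can review it.
  import logging
  from datetime import datetime

  from selenium import webdriver

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("test-evidence")

  def run_step_with_evidence(driver: webdriver.Chrome, name: str, step) -> None:
      """Run a test step callable; on failure, save a screenshot and log it."""
      try:
          step()
          log.info("Step passed: %s", name)
      except Exception:
          path = f"failure_{name}_{datetime.now():%Y%m%d_%H%M%S}.png"
          driver.save_screenshot(path)  # screenshot for the execution report
          log.exception("Step failed: %s (screenshot saved to %s)", name, path)
          raise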

Data Privacy & Security

We implement industry-leading security measures to protect sensitive information while maintaining transparency in our AI interactions. These measures align with industry best practices and regulatory standards, ensuring strong protection for customer data.

1. What specific measures does testRigor take to protect sensitive test data and credentials?

testRigor employs multiple layers of security to safeguard sensitive data, including:

  • Encryption: All data is encrypted both in transit (TLS 1.3) and at rest (FIPS 140-2 compliant AES-256); see the sketch after this list.
  • Role-Based Access Control (RBAC): Ensures that only authorized users have access to specific data and functionalities.
  • Multi-Factor Authentication (MFA): Provides an additional layer of security for user authentication.
  • Regular Security Assessments: Includes vulnerability scans, penetration testing, and compliance audits to proactively identify and mitigate risks.
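
For readers who want a concrete picture of what AES-256 encryption at rest looks like, here is a minimal sketch using the Python cryptography package's AES-GCM primitive. It illustrates the standard technique only; testRigor's actual key management and storage pipeline are not shown here.

  # Minimal sketch of AES-256-GCM encryption at rest using the Python
  # "cryptography" package; illustrates the technique, not testRigor's
  # internal implementation or key management.
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  key = AESGCM.generate_key(bit_length=256)   # 256-bit key (keep in a KMS/HSM in practice)
  aesgcm = AESGCM(key)

  nonce = os.urandom(12)                      # unique 96-bit nonce per encryption
  plaintext = b"example test data at rest"
  ciphertext = aesgcm.encrypt(nonce, plaintext, None)

  assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext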

2. Is any test data stored or logged by testRigor or its AI providers? If so, for how long and under what conditions?

No, testRigor does not permanently store or log test data. Our platform interacts with AI models via APIs; while prompts and responses may appear in execution logs during active test sessions, they are not permanently stored.

Additionally, we provide customers with the flexibility to disable generative AI features at the company level. This ensures that organizations have full control over AI-assisted test automation.
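
The general principle of session-scoped, non-persistent logging can be pictured with the conceptual sketch below; the class, pattern, and method names are assumptions made for illustration and do not represent testRigor's actual logging pipeline.

  # Conceptual sketch only: keep prompts/responses in a session-scoped,
  # in-memory log with basic credential masking, and discard everything when
  # the session ends. Not testRigor's actual logging pipeline.
  import re

  SECRET_PATTERN = re.compile(r"(password|token|secret)\s*[:=]\s*\S+", re.IGNORECASE)

  class SessionLog:
      def __init__(self) -> None:
          self._entries: list[str] = []

      def record(self, text: str) -> None:
          # Mask anything that looks like a credential before it enters the log.
          self._entries.append(SECRET_PATTERN.sub(r"\1: ***", text))

      def close(self) -> None:
          # Nothing is written to durable storage; the log dies with the session.
          self._entries.clear()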

3. Do you have full control over data retention?

Customers have full control over data retention through clearly defined policies and contractual agreements. We provide data deletion options as per the terms agreed upon between testRigor and each client, ensuring compliance with internal security policies and external regulatory frameworks.

Data Usage & Third-Party Dependencies

Our approach to third-party AI providers prioritizes transparency and customer control over data usage. Below, we address key concerns regarding AI interactions and external dependencies.

1. Can we self-host or fully isolate our AI interactions to avoid external data processing?

testRigor does not currently offer self-hosting or full isolation of AI interactions; however, we implement stringent security controls, including encryption and access restrictions, to ensure data protection. Our platform utilizes advanced AI models that align with industry best practices for security and privacy, with data processing occurring externally via API-based interactions.

We emphasize that only test-related data is processed, and we enforce security controls to protect sensitive information. Customers also have the option to disable AI-driven features at the company level, providing flexibility based on security and compliance requirements.

2. In the event of a security breach involving an AI provider, how does testRigor mitigate potential data exposure risks for its customers?

testRigor does not share customer data with AI providers or use it for AI learning purposes. Our platform interacts with AI models via API without exposing customer-specific data beyond the test execution process.

To further mitigate risks, testRigor implements:

  • End-to-end encryption (TLS 1.3 in transit, AES-256 at rest) to protect all AI interactions.
  • Access controls and monitoring.
  • Regular security assessments and compliance measures to align with best-in-class security standards and regulations (ISO 27001, SOC 2, and HIPAA).

AI Decision Transparency & Explainability

testRigor implements rigorous validation processes to ensure that AI-generated decisions are accurate, explainable, and free from bias. Below, we address key questions regarding AI decision-making and oversight.

1. How can we verify that testRigor’s AI decisions are accurate and not biased toward incorrect test logic?

testRigor employs multiple strategies to validate AI decisions and mitigate bias, including:

  • Diverse and representative data sources to reduce the risk of model bias.
  • Regular model audits to assess accuracy and consistency.
  • Continuous learning and model updates to enhance decision quality over time.
  • Human oversight to validate AI-generated test scripts.

2. Can we audit or review AI-generated test scripts before they are executed in our environment?

Yes, testRigor allows full auditing and review of AI-generated test scripts before execution.

Customers have access to:

  • Pre-execution previews, where they can inspect and modify AI-generated test logic (a conceptual sketch follows this list).
  • User interaction mechanisms to refine scripts and ensure accuracy before deployment.
  • Manual intervention options, allowing users to override AI-generated steps as needed.
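
A conceptual sketch of such a review gate is shown below; the function name and prompt wording are assumptions made for illustration and do not represent testRigor's product code.

  # Conceptual sketch (not testRigor's product code): require explicit human
  # approval of AI-generated steps before anything is executed.
  def review_and_approve(steps: list[str]) -> list[str]:
      """Show each AI-generated step and keep only those a reviewer accepts."""
      approved = []
      for step in steps:
          answer = input(f"Execute step '{step}'? [y/N] ").strip().lower()
          if answer == "y":
              approved.append(step)
      return approved

  if __name__ == "__main__":
      generated = ['open "https://example.com"', 'click "Sign in"']
      print("Approved steps:", review_and_approve(generated))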

Performance & Reliability

testRigor is designed to adapt seamlessly as applications evolve. Its self-healing functionality minimizes maintenance effort and keeps automated tests stable and reliable, even as the application undergoes frequent updates.

1. How does testRigor’s AI adapt to UI changes, dynamic elements, or non-deterministic test environments?

testRigor utilizes AI-driven object recognition and self-healing capabilities to automatically adjust test scripts as applications evolve. This approach allows the system to:

  • Detect UI modifications and update test steps accordingly.
  • Adapt to dynamic elements without requiring manual intervention (see the generic sketch below).
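
Two common techniques behind this kind of resilience, waiting for dynamic elements and falling back to alternative locators, are sketched below with Selenium for illustration; this is a generic example and the locators are hypothetical, not testRigor's engine.

  # Generic Selenium sketch (not testRigor's engine): wait for dynamic
  # elements and fall back to alternative locators when the primary one
  # no longer matches the page.
  from selenium import webdriver
  from selenium.webdriver.support.ui import WebDriverWait
  from selenium.webdriver.support import expected_conditions as EC
  from selenium.common.exceptions import TimeoutException

  def find_with_fallback(driver: webdriver.Chrome, locators, timeout: int = 10):
      """Try each candidate (by, value) locator in order, waiting for visibility."""
      for by, value in locators:
          try:
              return WebDriverWait(driver, timeout).until(
                  EC.visibility_of_element_located((by, value))
              )
          except TimeoutException:
              continue  # locator no longer matches; try the next candidate
      raise TimeoutException(f"No candidate locator matched: {locators}")

  # Example usage (hypothetical locators):
  # from selenium.webdriver.common.by import By
  # candidates = [(By.ID, "submit-btn"), (By.XPATH, "//button[normalize-space()='Submit']")]
  # element = find_with_fallback(driver, candidates)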

Customization & Control

testRigor provides flexibility for organizations to tailor AI-powered test automation to their specific needs. Whether leveraging AI-driven test creation or preferring traditional scripting, customers have full control over feature usage.

1. Can we disable specific AI features, such as test generation from natural language, if we prefer traditional scripting?

Yes, testRigor allows customers to disable specific AI features, including test generation from natural language, if they prefer traditional scripting.

  • Organizations can fully disable generative AI capabilities at the company level, ensuring compliance with internal policies and security preferences.

Users retain control over the testing approach, allowing them to choose between AI-assisted and manual scripting based on their requirements.
