Overview

Test accounts are a crucial part of the Checkr Trust API integration process. They allow you to validate your integration without making calls to production data sources. This guide explains how test accounts work and provides details about the available test data.

Test accounts are designated with the stage "test" and let you:

  • Validate that your integration code works correctly
  • Understand the API response formats
  • Test error-handling scenarios
  • Familiarize yourself with the product workflow

Getting Access

Contact your account representative to request a test account, or email support@checkrtrust.com with any questions.

Available Products

Test accounts currently support:

  • Instant criminal checks (instant_criminal)
  • Sex offender registry checks (sex_offender_registry)
  • PII Validation (pii_validation)
  • Document Verification
  • Driver Checks (driver_license_status, motor_vehicle_report)

Other products are not available in test mode and cannot be enabled for a test account.

Product Behavior and Error Handling

  1. Using Test Data

    • When making requests with PII that matches our test profiles, you'll receive predefined mock responses
    • When making requests with PII that doesn't match test profiles, you'll receive a successful response (HTTP 200) with empty results
    • This allows you to test both your success and "no results found" handling
    • Important: For most test-mode mocks, only the first and last name are used to match test profiles. Other PII fields (DOB, SSN, etc.) are ignored for mock selection.
      • Exception: Driver Checks (driver_license_status, motor_vehicle_report) use deterministic mock selection based on check_type and driver_license_number (and still enforce normal input validation, e.g. required fields and state-specific requirements).
  2. Product Authorization

    • Each test account must have products explicitly enabled via product configurations
    • Attempting to use a product that isn't enabled will result in a 401 Unauthorized response

Test Data Sets

The test environment provides a set of mock profiles that return predefined results. Each mock profile is designed to demonstrate specific scenarios you might encounter in production.

Available Test Profiles (Checks)

Here are the test profiles available for instant criminal and sex offender registry checks:

  1. Marcus Williams

    • Results: Multiple traffic violations and a domestic violence charge with a match confidence level
  2. Sarah Martinez

    • Results: Multiple traffic violations across different states
  3. David Thompson

    • Results: Multiple sexual offense records and failure to register as sex offender
  4. Lisa Chen

    • Results: DUI and traffic violations
  5. Dolores Abernathy

    • Results: Multiple speeding violations
  6. Oliver Smith

    • Results: Sexual offense registry record
  7. Rebecca Foster

    • Results: Theft and traffic violations
  8. Miguel Santos

    • Results: Sex offender registry record
  9. Kimberly Park

    • Results: Sex offender registry record and related court / department of corrections records
  10. Tyler Brooks

    • Results: Sex offender registry record and traffic violation
  11. Ava Jones

    • Results: Sexual offense records and failure to register as sex offender

Available Test Profiles (PII Validation)

PII Validation uses name-based mock selection. Use exact first and last names below (case-insensitive). If the name is not in this list, the API returns a successful response with very low match scores and a result context indicating the name cannot be resolved.

  1. Marcus Williams

    • Behavior: Full match.
  2. Sarah Martinez

    • Behavior: No match. All attributes have very low correlation.
  3. David Thompson

    • Behavior: Strong match on name, email, SSN, and ZIP; weak on DOB/address/phone.

Available Test Profiles (Document Verification)

Document Verification uses name-based mock selection (case-insensitive). For test accounts, the API returns a shared cached collection_link and completes results asynchronously via a delayed job; the Socure collection session is not used to determine the final results.

  1. Marcus Williams

    • Behavior: Accept.
  2. Sarah Martinez

    • Behavior: Reject.
  3. David Thompson

    • Behavior: Failed (mock provider error).
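Because test-mode Document Verification results complete asynchronously via a delayed job, a client typically polls the IDV resource until results appear. A minimal sketch, where `fetch` is a stand-in for a GET on the resource (an assumption, not a documented SDK call):

```python
import time

def wait_for_doc_verification(fetch, max_attempts=10, delay_seconds=0):
    """Poll fetch() (a stand-in for GET on the IDV resource) until the
    delayed test-mode job has filled in the results."""
    for _ in range(max_attempts):
        body = fetch()
        if body.get("results"):
            return body
        time.sleep(delay_seconds)
    raise TimeoutError("document verification did not complete in time")
```

In production you would add a sensible `delay_seconds` and back off between attempts rather than polling in a tight loop.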

Using Test Data

  1. Use the exact names provided above

    • First and last name are the only criteria that determine which mock profile is used; other details such as dob, middle name, phone, and email do not affect the results returned (the exception is Driver Checks, which match on check_type and driver_license_number).
  2. Names are case-insensitive
  3. For Checks endpoints, PII combinations not listed above will return empty results with a 200 OK status; for PII Validation, non-matching names return a successful response with all attribute scores set to 0 and a CT-0523 result context.
  4. Test accounts cannot access production data

PII Validation in Test Accounts

  • Synchronous results: pii_validation returns results immediately in the response.
  • Matching rules: Only first_name and last_name determine which mock response is returned.
  • Unknown names: Returns a successful response with minimal attribute_match_scores and a result_context entry for CT-0523 ("Name cannot be resolved to the individual").
  • Result fields: Results include attribute_match_scores per attribute (0 or 100), an overall_match_score (0, 50, or 100), and a result_context array of objects derived from provider reason codes (each item includes code, title, and category).

Example: Create PII Validation (from PII)

Request:

{
  "first_name": "Marcus",
  "last_name": "Williams",
  "idv_type": "pii_validation",
  "email": "test@example.com"
}

Response (201):

{
  "id": "<uuid>",
  "idv_type": "pii_validation",
  "results": {
    "attribute_match_scores": {
      "first_name": 100,
      "last_name": 100,
      "dob": 100,
      "phone": 100,
      "email": 100,
      "address": 100,
      "city": 100,
      "state": 100,
      "zip_code": 100,
      "ssn": 100
    },
    "overall_match_score": 100,
    "result_context": [
      {
        "code": "<CT-code>",
        "title": "<human-readable title>",
        "category": "informational"
      }
    ]
  }
}

Example: Create PII Validation (unknown name)

Request:

{
  "first_name": "Unknown",
  "last_name": "User",
  "idv_type": "pii_validation"
}

Response (201):

{
  "id": "<uuid>",
  "idv_type": "pii_validation",
  "results": {
    "attribute_match_scores": {
      "first_name": 0,
      "last_name": 0,
      "dob": 0,
      "phone": 0,
      "email": 0,
      "address": 0,
      "city": 0,
      "state": 0,
      "zip_code": 0,
      "ssn": 0
    },
    "overall_match_score": 0,
    "result_context": [
      {
        "code": "CT-0523",
        "title": "Name cannot be resolved to the individual",
        "category": "rejection"
      }
    ]
  }
}
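A small helper can classify these responses using the documented fields (`overall_match_score` and the CT-0523 context code). The category labels are our own, not API values:

```python
def interpret_pii_validation(results: dict) -> str:
    """Classify a pii_validation results body per the documented fields.

    Labels ("full_match", etc.) are illustrative, not API values.
    """
    codes = {ctx.get("code") for ctx in results.get("result_context", [])}
    if "CT-0523" in codes:
        return "name_not_resolved"
    if results.get("overall_match_score") == 100:
        return "full_match"
    return "partial_match"
```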

Testing Error Scenarios

When integrating with the API, it's important to test how your application handles various error conditions. Here are key scenarios you should test:

1. Input Validation Errors (400 Bad Request)

Test these scenarios to ensure your application handles validation errors gracefully:

  • Invalid Date Format

    {
      "first_name": "Marcus",
      "last_name": "Williams",
      "dob": "01-15-1988" // Invalid format, should be YYYYMMDD
    }
  • Missing Required Fields

    {
      "first_name": "Marcus"
      // Missing required last_name field
    }
  • Invalid Field Format

    {
      "first_name": "Marcus",
      "last_name": "Williams",
      "ssn": "123456789" // Invalid format, should be XXX-XX-XXXX
    }
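You can catch these problems before making a request with a lightweight client-side pre-check that mirrors the documented formats (dob as YYYYMMDD, ssn as XXX-XX-XXXX). This is a sketch, not the API's full validation:

```python
import re
from datetime import datetime

def precheck_request(body: dict) -> list[str]:
    """Return problems matching the documented 400 scenarios, if any."""
    problems = []
    if not body.get("last_name"):
        problems.append("last_name is required")
    dob = body.get("dob")
    if dob is not None:
        try:
            datetime.strptime(dob, "%Y%m%d")  # documented format: YYYYMMDD
        except ValueError:
            problems.append("dob must be YYYYMMDD")
    ssn = body.get("ssn")
    if ssn is not None and not re.fullmatch(r"\d{3}-\d{2}-\d{4}", ssn):
        problems.append("ssn must be XXX-XX-XXXX")
    return problems
```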

2. Authorization Errors (401 Unauthorized)

  • Attempting to use products not enabled for your account
  • Using expired or invalid access tokens
  • Using incorrect client credentials

3. Empty Results vs Errors

It's important to understand the difference between error responses and valid empty results:

  • Empty Results (200 OK)

    • Using valid PII that doesn't match any test profiles
    • The response will be successful with an empty results array
    {
      "results": []
    }
  • Error Response (400 Bad Request)

    • Using invalid PII formats or missing required fields
    • The response will include specific error details
    {
      "errors": [
        {
          "code": "validation_error",
          "title": "Invalid date format",
          "source": {
            "pointer": "/dob"
          }
        }
      ]
    }
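The distinction can be encoded in a small response classifier; a sketch, assuming the response body has already been parsed as JSON:

```python
def classify_response(status: int, body: dict) -> str:
    """Separate the documented outcomes: records found, a valid empty
    result set (200 with "results": []), and a JSON:API error body."""
    if status == 200:
        return "no_records" if body.get("results") == [] else "records_found"
    if body.get("errors"):
        return "error"
    return "unexpected"
```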

4. Product-Specific Testing

For each supported product (instant_criminal, sex_offender_registry, and pii_validation):

  1. Success Path

    • Use test profile names to get known results
    • Verify all response fields are correctly parsed
  2. Empty Results Path

    • Use valid but non-matching names
    • Verify your application handles empty results appropriately
  3. Error Path

    • Attempt to use unsupported products
    • Verify error handling for invalid input formats

Error Testing

  1. Implement Proper Error Handling

    The API follows the JSON:API error format. All errors will be returned in this standardized format:

    {
      "errors": [
        {
          "code": "validation_error",
          "title": "Invalid date format",
          "source": {
            "pointer": "/dob"
          }
        }
      ]
    }

    Common error scenarios you might encounter:

    // Invalid SSN Format
    {
      "errors": [
        {
          "code": "validation_error",
          "title": "Invalid SSN format",
          "source": {
            "pointer": "/ssn"
          }
        }
      ]
    }
    
    // Missing Required Field
    {
      "errors": [
        {
          "code": "validation_error",
          "title": "Last name is required",
          "source": {
            "pointer": "/last_name"
          }
        }
      ]
    }
    
    // Unauthorized Product Access
    {
      "errors": [
        {
          "code": "validation_error",
          "title": "The product requested is not enabled for this account",
          "source": {
            "pointer": "/check_type"
          }
        }
      ]
    }

    Your error handling should:

    • Parse the errors array to handle multiple errors
    • Use the code field for programmatic error handling
    • Display the title field for user-friendly error messages
    • Use the source.pointer to highlight specific form fields that need correction
  2. Validate All Response Fields

    • Check both successful and error responses
    • Verify your application correctly processes all fields
  3. Test Edge Cases

    • Very long names
    • Names with prefixes, suffixes, or multiple parts
    • Special characters in names
    • Different date formats
    • Missing optional fields
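The error-handling checklist in item 1 can be sketched as a helper that flattens the errors array into (code, title, field) tuples for programmatic handling and form-field highlighting:

```python
def summarize_errors(body: dict) -> list[tuple[str, str, str]]:
    """Flatten a JSON:API error body into (code, title, field) tuples.

    The field strips the leading "/" from source.pointer so it can be
    matched against form field names.
    """
    summaries = []
    for err in body.get("errors", []):
        pointer = err.get("source", {}).get("pointer", "")
        summaries.append((err.get("code", ""), err.get("title", ""), pointer.lstrip("/")))
    return summaries
```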

Remember that test accounts are designed to help you validate your integration thoroughly before moving to production. Take advantage of these error scenarios to ensure your application handles all cases gracefully.

Other Scenarios to Test

  1. Test Error Handling:

    • Try requests with invalid PII to ensure your application handles empty results appropriately
    • Test unauthorized product access to ensure your error handling works
    • Validate your response parsing for both successful and empty results
  2. Validate Response Parsing:

    • Use different test profiles to ensure your code properly handles various response formats
    • Test both criminal records and sex offender registry responses
  3. Test All Workflows:

    • Exercise both successful and unsuccessful paths in your integration
    • Verify your handling of empty results (200 OK with no records)
    • Validate your error handling for unauthorized products (401)

Moving to Production

When you're ready to move to production:

  1. Work with your account representative to determine the appropriate account stage (pilot or live)
  2. Update your integration to use production credentials
  3. Begin testing with real PII data

Important Notes

  • Test accounts are free to use and do not incur charges
  • Test data is static and does not change
  • Test accounts cannot access production data
  • Some test profiles may be updated or added over time
  • Contact support if you need additional test scenarios

Driver Checks in Test Accounts

Driver checks are asynchronous. For test accounts, the provider webhook is simulated via a delayed job. Create a driver check as normal; results are delivered later via webhook and reflected on GET /v1/driver_checks/{id}.

MVR Mock Input → Output Scenarios

Use the following driver license numbers to trigger specific MVR scenarios:

  • DL0001: Clean record
    • License: VALID, no events
  • DL0002: Events present
    • License: VALID, includes a speeding violation
  • DL0003: Suspended
    • License: SUSPENDED, includes a suspension action

Notes:

  • The orderDetails.requestedAs.clientReferenceId is set to your driver check id for correlation.
  • For license numbers not listed above, the mock provider returns a NOT_FOUND result (no licenses or events).
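For assertions in your own test suite, the mapping above can be captured directly. The tuple shape here is our own convention, not an API structure:

```python
# Documented MVR mock scenarios keyed by driver license number; any other
# number falls through to the NOT_FOUND result. Tuple shape is illustrative.
MVR_SCENARIOS = {
    "DL0001": ("VALID", "no events"),
    "DL0002": ("VALID", "speeding violation"),
    "DL0003": ("SUSPENDED", "suspension action"),
}

def expected_mvr_scenario(license_number: str) -> tuple[str, str]:
    return MVR_SCENARIOS.get(license_number, ("NOT_FOUND", "no licenses or events"))
```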

DLSC Mock Input → Output Scenarios

Driver License Status Checks (DLSC) are also asynchronous. For test accounts, we simulate Tessera completion via webhook and you will receive a signed event of type:

  • driver_license_status.completed

Required fields (still validated in test mode)

Even though the mock result selection is deterministic, the API still enforces the same request validation rules as live driver checks (e.g. required fields and certain state-specific requirements). At minimum you must send:

  • check_type: "driver_license_status"
  • first_name, last_name, dob (YYYYMMDD)
  • driver_license_number, driver_license_state

Some states require additional fields:

  • requester_code is required for CA, PA, UT
  • zip_code is required for RI
  • ssn is required for WV and TX (if you do not provide it, the system will attempt to derive it; if it cannot, you’ll receive a validation error)

Deterministic scenarios (PII you can test)

Use these driver_license_number values (with check_type: "driver_license_status") to exercise specific outcomes:

  • A1234567: Passenger validity = VALID (non-expired)
  • 1234567: Passenger validity = INVALID (suspended)
  • 22038374: Passenger validity = INVALID (expired)
  • Any other license number: Passenger validity = NOT FOUND

Notes:

  • For DLSC, the high-level result is represented in the results.passenger.validity field (see the driver_check_driver_license_status_result schema).
  • Mock selection for DLSC is based on check_type + driver_license_number; other PII fields do not change which mock scenario you receive (but may be required for validation depending on driver_license_state).
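A pre-flight check for DLSC requests can combine the required fields and the state-specific rules above. This is a sketch; SSN handling for WV/TX is omitted because the system may derive it server-side:

```python
def dlsc_precheck(body: dict) -> list[str]:
    """Flag missing fields per the documented DLSC validation rules."""
    required = ["check_type", "first_name", "last_name", "dob",
                "driver_license_number", "driver_license_state"]
    problems = [f"{field} is required" for field in required if not body.get(field)]
    state = body.get("driver_license_state")
    if state in ("CA", "PA", "UT") and not body.get("requester_code"):
        problems.append("requester_code is required for CA/PA/UT")
    if state == "RI" and not body.get("zip_code"):
        problems.append("zip_code is required for RI")
    return problems
```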