"Test Reports wasn't configured correctly." with Junit XML file

I have been trying to get the Export test results to the Test Reports add-on to work and haven’t had much success. I installed the JUnit XML reporter to get the proper file type into my repository. The file is created in my expected location and the steps "pass", but the Test Reports add-on page says "Test Reports wasn’t configured correctly. Please make sure you configure it in the Workflow Editor. Get help for setting up Test Reports."

I am using a custom test script because we have custom logic that needs to run before the tests can. We write all test result files out to $E2E_RESULTS_DIR, and when I download the artifact I can see my XML file nested where I’ve told the steps to look (the resulting layout is sketched after the yml below).

Here are the related parts of my yml configuration:

    - script@1:
        title: Run E2E Tests
        inputs:
        - content: |-
            #!/usr/bin/env bash
            set -eo pipefail

            # ─── Prereqs ─────────────────────────────────────────────────────────
            if ! command -v jq >/dev/null; then
              echo "❌ 'jq' is required but not installed. Aborting."
              exit 1
            fi

            if ! command -v allure >/dev/null; then
              echo "🔧 Installing Allure CLI..."
              npm install -g allure-commandline --silent
            fi

            echo -e "\n=== WebdriverIO Setup Check ==="
            if [ -f "e2e/wdio/wdio.browserstack.conf.ts" ]; then
              echo "✅ Browserstack config exists"
            else
              echo "❌ Browserstack config not found"
              find . -name "wdio*.ts"
            fi

            # ─── Setup dirs & vars ───────────────────────────────────────────────
            mkdir -p test-logs
            RESULT_ROOT="$BITRISE_DEPLOY_DIR/e2e-results"
            mkdir -p "$RESULT_ROOT"

            # ─── Setup suite names ───────────────────────────────────────────────
            # (If you want to run all three suites in one go)
            SUITES=(Profiles)

            # Build up "--suite AccountManagement --suite Profiles --suite Feedback"
            ARGS=()
            for s in "${SUITES[@]}"; do
              ARGS+=(--suite "$s")
            done

            export RETRIES=3  # WDIO's config does `parseInt(process.env.RETRIES)`

            # Use the first suite name for log/result paths (SUITE was previously unset)
            SUITE="${SUITES[0]}"
            LOG_FILE="test-logs/android_${SUITE}.log"
            RESULT_DIR="$RESULT_ROOT/android_${SUITE}_tests"
            mkdir -p "$RESULT_DIR"
            rm -rf ./junit-results

            # ─── Check available BrowserStack threads ────────────────────────────
            echo -e "\n=== Checking BrowserStack threads for suite: $SUITE ==="
            curl -u "${BROWSERSTACK_USER}:${BROWSERSTACK_KEY}" \
                 -X GET "https://api-cloud.browserstack.com/app-automate/plan.json" \
                 > threads.json

            THREADS=$(jq '.parallel_sessions_running' threads.json || echo "0")
            echo "Current Browserstack Thread Count: $THREADS"

            if [ "$THREADS" -gt 18 ]; then
              echo "❌ Too many BrowserStack threads in use ($THREADS > 18). Skipping suite: $SUITE"
              rm -f threads.json
              exit 0
            fi

            rm -f threads.json

            # ─── Run WebdriverIO for "Profiles" ──────────────────────────────────
            echo -e "\nℹ️ Running WDIO with suite=\"$SUITE\" (RETRIES=$RETRIES)..."

            set +e
            export RUNTYPE=android
            yarn test:wdio:android "${ARGS[@]}" --logLevel info 2>&1 | tee "$LOG_FILE"
            WDIO_EXIT=${PIPESTATUS[0]}
            set -e

            if [ "$WDIO_EXIT" -eq 0 ]; then
              echo "✅ Suite \"$SUITE\" passed"
            else
              echo "❌ Suite \"$SUITE\" failed"
            fi

            # ─── Save logs & JUnit XML ───────────────────────────────────────────
            cp "$LOG_FILE" "$RESULT_DIR/full-log.txt"

            # ─── Copy JUnit XMLs ─────────────────────────────────────────────────
            # Any XML files generated by the JUnit reporter will live under ./junit-results
            if [ -d "./junit-results" ]; then
              echo "ℹ️ Copying JUnit XMLs → $RESULT_DIR"
              cp ./junit-results/*.xml "$RESULT_DIR/" || true
            else
              echo "⚠️ No ./junit-results directory found; skipping copy of JUnit XMLs"
            fi

            # ─── Merge & generate a unified Allure report ────────────────────────
            # WDIO's onComplete hook should already have generated ./allure-report
            # for this suite, but we consolidate raw JSON into a single folder
            # (in case you want to merge multiple suite runs).
            CONS="$RESULT_ROOT/allure-results"
            mkdir -p "$CONS"

            # Copy this suite's raw JSON from the default allure-results/ folder into CONS
            if [ -d "./allure-results" ]; then
              cp -r "./allure-results/"* "$CONS"/ 2>/dev/null || true
            fi

            # Now generate a final HTML report if any JSON exists
            if find "$CONS" -name '*.json' | read; then
              echo "ℹ️ Generating consolidated Allure report from $CONS → $RESULT_ROOT/allure-report"
              allure generate "$CONS" --clean -o "$RESULT_ROOT/allure-report"
            else
              echo "⚠️ No raw Allure JSON in $CONS - skipping global report"
            fi

            # ─── Expose & exit ───────────────────────────────────────────────────
            envman add --key E2E_RESULTS_DIR --value "$RESULT_ROOT"
            # envman only exposes the var to later steps, so echo the local value here
            echo "📢 E2E_RESULTS_DIR = $RESULT_ROOT"
            exit "$WDIO_EXIT"
    - deploy-to-bitrise-io@2:
        inputs:
        - deploy_path: "$E2E_RESULTS_DIR"
        - notify_user_groups: none
        - pipeline_intermediate_files: "$E2E_RESULTS_DIR:TEST_RESULTS_DIR"
        - is_compress: true
        title: Share Test Results
        is_always_run: true
    - custom-test-results-export@1:
        inputs:
        - test_name: Android E2E Tests
        - base_path: "$E2E_RESULTS_DIR"
        - search_pattern: "*/junit-results/*"
        - bitrise_test_result_dir: "$E2E_RESULTS_DIR"
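For reference, here is the layout I expect that script to produce, relative to `$E2E_RESULTS_DIR` (file names are illustrative; the device suffix comes from the JUnit reporter config in the wdio config below):

    e2e-results/                      # = $E2E_RESULTS_DIR ($BITRISE_DEPLOY_DIR/e2e-results)
    ├── android_Profiles_tests/
    │   ├── full-log.txt
    │   └── junit-0-0-<device>.xml    # copied from ./junit-results; <device> is illustrative
    ├── allure-results/
    └── allure-report/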

Here is my wdio.conf file:

    import type { Options } from "@wdio/types"
    import * as allure from "allure-commandline"
    import "dotenv/config"
    import * as path from "path"
    import { pixelBrowserstackOptions } from "./config/capabilities"
    import { Browserstack } from "./helpers/browserstack"

    // Set retries on failure to 0 unless the env variable is set
    const retries = parseInt(process.env.RETRIES ?? "", 10)
    const retryLimit = Number.isInteger(retries) ? retries : 0

    export const config: Options.Testrunner = {
      autoCompileOpts: {
        autoCompile: true,
        tsNodeOpts: {
          transpileOnly: true,
          project: path.resolve(__dirname, "./tsconfig.json"),
        },
      },
      services: [],
      // To define and run all tests
      specs: [path.resolve(__dirname, "./specs/**/*.ts")],
      // To define and run tests by domain
      suites: {
        // UIM
        AccountManagement: [path.resolve(__dirname, "./specs/account-management/**/*.ts")],
        Feedback: [path.resolve(__dirname, "./specs/feedback/**/*.ts")],
        Profiles: [path.resolve(__dirname, "./specs/profiles/**/*.ts")],
        // POST
        Coordination: [path.resolve(__dirname, "./specs/coordination/**/*.ts")],
        PaymentRelease: [path.resolve(__dirname, "./specs/payment-release/**/*.ts")],
        // MATCH
        Booking: [path.resolve(__dirname, "./specs/match-making/booking/**/*.ts")],
        ListingSearch: [path.resolve(__dirname, "./specs/match-making/listing-search/**/*.ts")],
        PreBooking: [path.resolve(__dirname, "./specs/match-making/pre-booking/**/*.ts")],
        // SP
        MyShipments: [path.resolve(__dirname, "./specs/my-shipments/**/*.ts")],
      },
      maxInstances: 3,
      capabilities: [],

      // Level of logging verbosity: trace | debug | info | warn | error | silent
      logLevel: "error",
      waitforTimeout: 20000,
      connectionRetryTimeout: 120000,
      connectionRetryCount: 2,
      // The number of retry attempts for an entire spec file when it fails as a whole
      specFileRetries: retryLimit,
      // Whether retried spec files should be retried immediately or deferred to the end of the queue
      specFileRetriesDeferred: true,
      framework: "mocha",
      reporters: [
        ["spec", { showPreface: false, addConsoleLogs: true }],
        [
          "allure",
          {
            outputDir: "allure-results",
            disableWebdriverStepsReporting: true,
            disableWebdriverScreenshotsReporting: true,
            addConsoleLogs: true,
            reportedEnvironmentVars: { true: "true", false: "false" },
          },
        ],
        [
          "junit",
          {
            outputDir: "./junit-results",
            outputFileFormat: function (options: {
              cid: string
              capabilities: {
                browserName?: string
                "bstack:options"?: { deviceName?: string }
                [key: string]: any
              }
              specs: string[]
              results: { errors: number; failures: number; passed: number; skipped: number }
            }) {
              const bsOpts = options.capabilities["bstack:options"]
              const device = bsOpts?.deviceName || ""
              const nameParts = [options.cid]
              if (device) nameParts.push(device)

              return `junit-${nameParts.join("-")}.xml`
            },
          },
        ],
      ],
      // Options to be passed to Mocha. See the full list at http://mochajs.org/
      mochaOpts: {
        ui: "bdd",
        timeout: 400000,
      },
      // eslint-disable-next-line @typescript-eslint/no-unused-vars
      beforeSession(config, capabilities, specs, cid) {
        const isAndroid = capabilities["platformName"] === "android"
        const isWebview = specs[0].includes("webview")

        capabilities["appium:autoGrantPermissions"] = true
        if (isAndroid && isWebview) {
          capabilities["bstack:options"] = pixelBrowserstackOptions
        }

        if (specs && specs.some(spec => spec.includes("language-preferences.e2e.ts"))) {
          capabilities["appium:language"] = "es"
          capabilities["appium:locale"] = "ES"
        }
      },
      async afterTest(test, context, result) {
        if (result.passed === false) {
          try {
            const testTitle = test.title
            const sessionId = driver.sessionId
            // Pull the BrowserStack recording for the failed test
            await Browserstack.getBrowserstackRecording(sessionId, testTitle)
          } catch (err) {
            console.error(err)
          }
        }
      },
      onComplete: function () {
        const reportError = new Error("Could not generate Allure report")
        const generation = allure(["generate", "allure-results", "--clean"])
        return new Promise<void>(resolve => {
          /* Log an error when Allure reporting fails for any reason, but do not block
             the pipeline; QA will still be alerted by the Slack webhook when uploading
             a non-existent allure directory fails */
          const generationTimeout = setTimeout(() => {
            console.error("Error generating Allure report due to timeout:", reportError)
            resolve()
          }, 30000)

          try {
            generation.on("exit", function () {
              clearTimeout(generationTimeout)
              console.log("Allure report successfully generated")
              resolve()
            })
          } catch (error) {
            console.error("Error generating Allure report:", error)
            resolve()
          }
        })
      },
    }
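With that outputFileFormat, each runner instance should emit one XML file under ./junit-results, named from the cid plus the BrowserStack device name when one is present, e.g. (illustrative values):

    junit-results/
    ├── junit-0-0-Google Pixel 7.xml
    └── junit-0-1-Google Pixel 7.xml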

I’ve tried a few variations of the custom-test-results-export@1 step, including the one in the yml above, and also:

    - custom-test-results-export@1:
        inputs:
        - test_name: Android E2E Tests
        - base_path: $E2E_RESULTS_DIR
        - search_pattern: '*'

but so far nothing I’ve tried satisfies the Test Reports add-on.
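For debugging, a minimal extra step like this (just a sketch) can dump whatever the export step actually staged into the directory the Test Reports add-on reads from (as far as I can tell, `$BITRISE_TEST_RESULT_DIR` is the step's default target):

    - script@1:
        title: Debug Test Reports dir
        is_always_run: true
        inputs:
        - content: |-
            #!/usr/bin/env bash
            # Show what ended up in the add-on's result directory
            echo "BITRISE_TEST_RESULT_DIR=$BITRISE_TEST_RESULT_DIR"
            find "$BITRISE_TEST_RESULT_DIR" -type f -print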

Has anyone experienced this, and/or does anyone know how to get things working right?