ElevenLabs

Set up and create transcripts using ElevenLabs as a speech-to-text provider for post-meeting and real-time transcription.

You can use ElevenLabs as a speech-to-text provider for transcription. This guide explains how to:

  • Set up ElevenLabs as a speech-to-text provider for post-meeting and real-time transcription
  • Configure ElevenLabs for multilingual transcription
  • Access ElevenLabs-specific fields on the transcript data (available when not using Perfect Diarization or Hybrid Diarization)

It also answers frequently asked questions at the end.

You can use this sample app to compare transcription quality and output across multiple third-party providers using Recall.ai's async transcription API.


Setup

Before using ElevenLabs with Recall.ai, you must create an ElevenLabs API key and add it to the Recall.ai dashboard.

Create an ElevenLabs API key

Create a new API key in the ElevenLabs API keys page.

Important: Make sure the API key has access to ElevenLabs Speech-to-Text.

Add your ElevenLabs API key to the Recall.ai transcription dashboard

Add your ElevenLabs API key in the Recall.ai dashboard for the Recall region where you will create recordings:

❗️

When adding your ElevenLabs API key in the dashboard, make sure you set Data Residency correctly.

If the data residency setting does not match your ElevenLabs account, transcription requests may fail with the following error:

{"message_type":"auth_error","error":"You must be authenticated to use this endpoint."}

By default, ElevenLabs only supports US data residency unless you are on an ElevenLabs enterprise plan. See ElevenLabs data residency for more details.
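
If you surface transcription failures to operators, it can help to recognize this specific error in your logging or alerting. A minimal Python sketch, matching on the error payload shown above (the string match is a heuristic; Recall.ai does not document a stable error code for this case):

```python
def is_residency_auth_error(message: dict) -> bool:
    """Heuristically detect the data-residency auth error shown above.

    Matching on the error text is a heuristic, not a documented error
    code, so adjust the check if the message wording changes.
    """
    return (
        message.get("message_type") == "auth_error"
        and "must be authenticated" in message.get("error", "")
    )
```

When this check fires, verify that the Data Residency setting on your ElevenLabs API key in the Recall.ai dashboard matches your ElevenLabs account.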


Quickstart

Supported transcription workflows

Workflow | Supported
Post-meeting transcription | ✅ Yes
Real-time transcription with meeting bots | ✅ Yes
Real-time transcription with Desktop Recording SDK | ✅ Yes

ElevenLabs for post-meeting transcription

Use post-meeting transcription when you want to transcribe a recording after it has completed.

To use ElevenLabs for post-meeting transcription, call the Create Async Transcript endpoint and set provider to elevenlabs_async along with your preferred ElevenLabs transcription options:

curl --request POST \
     --url https://RECALL_REGION.recall.ai/api/v1/recording/RECORDING_ID/create_transcript/ \
     --header "Authorization: RECALL_API_KEY" \
     --header "accept: application/json" \
     --header "content-type: application/json" \
     --data '
{
  "provider": {
    "elevenlabs_async": {
      "model_id": "scribe_v2"
    }
  }
}
'

See the provider.elevenlabs_async field on the Create Async Transcript endpoint for the full list of options available.

📘

See post-meeting transcription for the full post-meeting transcription implementation, including when to create the post-meeting transcript, which webhook events to listen for, how to retrieve the completed transcript, how to handle transcription failures, and more.
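
As a sketch of that flow, the request from the curl example above can be built programmatically, and incoming status webhooks routed to a follow-up step. The webhook event names used here (`transcript.done`, `transcript.failed`) are assumptions for illustration; confirm them against the post-meeting transcription guide:

```python
import json
import urllib.request

def build_create_transcript_request(region: str, recording_id: str,
                                    api_key: str, model_id: str = "scribe_v2"):
    """Build the Create Async Transcript request (mirrors the curl example)."""
    url = f"https://{region}.recall.ai/api/v1/recording/{recording_id}/create_transcript/"
    body = {"provider": {"elevenlabs_async": {"model_id": model_id}}}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": api_key, "content-type": "application/json"},
        method="POST",
    )

def next_action(webhook_event: dict) -> str:
    """Route a transcript status webhook to the appropriate follow-up step."""
    kind = webhook_event.get("event")
    if kind == "transcript.done":    # assumed success event name
        return "fetch_transcript"
    if kind == "transcript.failed":  # assumed failure event name
        return "retry_or_alert"
    return "ignore"
```

Sending the built request with `urllib.request.urlopen` (or any HTTP client) kicks off transcription; the webhook router then tells you when to call the Retrieve Transcript endpoint.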

ElevenLabs for real-time transcription

To use ElevenLabs for real-time transcription, set the provider to elevenlabs_streaming when creating the recording. Real-time transcript delivery is configured separately through recording_config.realtime_endpoints.

Meeting bots

To use ElevenLabs real-time transcription with meeting bots, set recording_config.transcript.provider.elevenlabs_streaming in the Create Bot request along with your preferred ElevenLabs transcription options:

curl --request POST \
     --url https://RECALL_REGION.recall.ai/api/v1/bot/ \
     --header "Authorization: RECALL_API_KEY" \
     --header "accept: application/json" \
     --header "content-type: application/json" \
     --data '
{
  "meeting_url": "MEETING_URL",
  "recording_config": {
    "transcript": {
      "provider": {
        "elevenlabs_streaming": {
          "model_id": "scribe_v2_realtime"
        }
      }
    },
    "realtime_endpoints": [
      {
        "url": "REAL_TIME_TRANSCRIPT_WEBHOOK_ENDPOINT",
        "type": "webhook",
        "events": ["transcript.data"]
      }
    ]
  }
}
'

See the recording_config.transcript.provider.elevenlabs_streaming field on the Create Bot endpoint for the full list of options available.

📘

See Meeting Bot Real-time Transcription for the full meeting bot implementation, including how to create a bot with real-time transcription enabled, configure real-time transcript delivery, subscribe to transcript events, and receive transcript data from a live meeting.
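
On the receiving side, your webhook endpoint gets `transcript.data` events as the meeting runs. A minimal sketch of turning one event into a readable line — the payload shape assumed here (`data.data.words[].text`, `data.data.participant.name`) is for illustration only; see the real-time transcription docs for the actual schema:

```python
from typing import Optional

def format_transcript_line(event: dict) -> Optional[str]:
    """Render one transcript.data webhook payload as "speaker: text".

    Assumes words live at data.data.words and the speaker at
    data.data.participant.name; adjust to the documented schema.
    """
    if event.get("event") != "transcript.data":
        return None
    payload = event.get("data", {}).get("data", {})
    words = payload.get("words", [])
    speaker = payload.get("participant", {}).get("name", "unknown")
    text = " ".join(w.get("text", "") for w in words).strip()
    return f"{speaker}: {text}" if text else None
```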

Desktop Recording SDK

To use ElevenLabs real-time transcription with the Desktop Recording SDK, set recording_config.transcript.provider.elevenlabs_streaming in the Create Desktop SDK Upload request along with your preferred ElevenLabs transcription options:

curl --request POST \
     --url https://RECALL_REGION.recall.ai/api/v1/sdk_upload/ \
     --header "Authorization: RECALL_API_KEY" \
     --header "accept: application/json" \
     --header "content-type: application/json" \
     --data '
{
  "recording_config": {
    "transcript": {
      "provider": {
        "elevenlabs_streaming": {
          "model_id": "scribe_v2_realtime"
        }
      }
    },
    "realtime_endpoints": [
      {
        "url": "REAL_TIME_TRANSCRIPT_WEBHOOK_ENDPOINT",
        "type": "webhook",
        "events": ["transcript.data"]
      }
    ]
  }
}
'

See the recording_config.transcript.provider.elevenlabs_streaming field on the Create Desktop SDK Upload endpoint for the full list of options available.

📘

See Desktop Recording SDK Real-time Transcription for the full Desktop Recording SDK implementation, including how to create a Desktop SDK upload with real-time transcription enabled, configure transcript delivery, start recording from the SDK, and receive transcript data from the recording.


Additional configurations

Multilingual transcription with ElevenLabs

In some meetings, participants may speak in more than one language, and you may not know ahead of time which language or languages will be used. In those cases, you can use a transcription provider that supports multilingual transcription. Multilingual transcription generally includes two distinct features:

Feature | Description
Language detection | Detects the spoken language without requiring you to set a language explicitly.
Code-switching | Handles conversations where speakers switch between two or more languages during the same meeting.

Multilingual transcription options for post-meeting transcription

ElevenLabs post-meeting transcription supports both language detection and code-switching. Below is an example of how to configure multilingual transcription with ElevenLabs using the Create Async Transcript endpoint:

{
  // ... other Create Async Transcript request options
  "provider": {
    "elevenlabs_async": {
      "model_id": "scribe_v2"
      // If `language_code` is unset, the model will detect the language automatically
    }
  }
}

Multilingual transcription options for real-time transcription

ElevenLabs real-time transcription supports both language detection and code-switching. Below is an example of how to configure multilingual transcription with ElevenLabs using either the Create Bot or Create Desktop SDK Upload endpoints:

{
  // ... other request options
  "recording_config": {
    // ... other recording options
    "transcript": {
      // ... other transcript options
      "provider": {
        "elevenlabs_streaming": {
          "model_id": "scribe_v2_realtime"
          // If `language_code` is unset, the model will detect the language automatically
        }
      }
    }
  }
}

Accessing provider-specific fields from ElevenLabs transcript data

If you need ElevenLabs-specific fields that are not exposed in the normalized Recall transcript, you can access the provider data.

❗️

Provider data is not available when using Perfect Diarization or Hybrid Diarization.

Accessing ElevenLabs provider data post-meeting

For post-meeting transcription, you can access the raw ElevenLabs transcription response from the completed transcript artifact. To access provider data after the meeting, fetch the transcript artifact using the Retrieve Transcript endpoint. Use the data.provider_data_download_url field from the response to download the raw provider response.

The response returned by provider_data_download_url varies by provider. See the accessing provider data section of the post-meeting transcription guide for implementation details.
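
A sketch of that two-step fetch in Python — retrieve the transcript, then follow `data.provider_data_download_url`. The endpoint path mirrors the other examples in this guide, and the download URL is assumed to be pre-signed (so no Authorization header is sent on the second request); verify both against the Retrieve Transcript docs:

```python
import json
import urllib.request

def extract_provider_data_url(transcript: dict):
    """Pull the raw-provider-response URL from a Retrieve Transcript payload."""
    return transcript.get("data", {}).get("provider_data_download_url")

def fetch_provider_data(region: str, transcript_id: str, api_key: str) -> dict:
    """Retrieve a transcript, then download the raw ElevenLabs response.

    Sketch only: assumes the /api/v1/transcript/{id}/ path and a
    pre-signed download URL that needs no Authorization header.
    """
    url = f"https://{region}.recall.ai/api/v1/transcript/{transcript_id}/"
    req = urllib.request.Request(url, headers={"Authorization": api_key})
    with urllib.request.urlopen(req) as resp:
        transcript = json.load(resp)
    download_url = extract_provider_data_url(transcript)
    with urllib.request.urlopen(download_url) as resp:
        return json.load(resp)
```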

Accessing ElevenLabs provider data in real time

For real-time transcription, subscribe to transcript.provider_data in your real-time endpoint configuration if you need ElevenLabs-specific payloads:

{
  "url": "REAL_TIME_TRANSCRIPT_WEBHOOK_ENDPOINT",
  "type": "webhook",
  "events": ["transcript.data", "transcript.provider_data"]
}

See the accessing provider data section of the real-time transcription docs for implementation details.
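
With both events subscribed, your webhook receiver can dispatch on the event type, since normalized transcript data and raw ElevenLabs payloads arrive as separate events. A sketch — the nesting of the payload under `data.data` is an assumption; check the real-time docs for the exact schema:

```python
def split_realtime_event(event: dict):
    """Classify a real-time webhook and return (kind, payload).

    "normalized" for transcript.data, "provider" for
    transcript.provider_data. Payload nesting under data.data is an
    assumption for illustration.
    """
    payload = event.get("data", {}).get("data", {})
    kind = {
        "transcript.data": "normalized",
        "transcript.provider_data": "provider",
    }.get(event.get("event"), "unknown")
    return kind, payload
```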


FAQ

Do I need to add an ElevenLabs API key in every Recall.ai region?

Yes. Recall.ai regions are isolated, so you must add your ElevenLabs API key in each Recall.ai region where you want to use ElevenLabs transcription.

What regions can I configure with ElevenLabs?

You can configure ElevenLabs with any of the US, EU, and IN regions. See ElevenLabs for details about their EU region support.