Deepgram Language Detection: Supported Models and How to Use Them

September 9, 2025

Need to auto-detect a caller’s language before routing transcripts or running downstream NLP? Deepgram does support language detection, but only on specific models and only for pre-recorded audio. Here’s the reference I wish I had when wiring voice pipelines into n8n and Render.

Quick capabilities matrix

| Model family | Example model IDs | Supports `detect_language`? | Detection in streaming? | Notes |
| --- | --- | --- | --- | --- |
| Nova-3 | `nova-3`, `nova-3-general`, `nova-3-medical` | ✅ Yes (pre-recorded only) | ❌ No | Pass `detect_language=true`. Streaming endpoints ignore it. |
| Nova-2 | `nova-2`, `nova-2-general`, `nova-2-meeting`, `nova-2-phonecall`, etc. | ✅ Yes (pre-recorded only) | ❌ No | Works across domain-specific variants. |
| Nova (legacy) | `nova`, `nova-general`, `nova-phonecall`, `nova-medical` | ✅ Yes (pre-recorded only) | ❌ No | Still supported for batch jobs. |
| Enhanced | `enhanced`, `enhanced-general`, `enhanced-meeting`, `enhanced-phonecall`, etc. | ✅ Yes (pre-recorded only) | ❌ No | Use when you need higher accuracy but don't require streaming. |
| Base | `base`, `base-general`, `base-meeting`, `base-phonecall`, etc. | ✅ Yes (pre-recorded only) | ❌ No | Entry-level models still handle detection for offline audio. |
| Whisper via Deepgram | `whisper-base`, `whisper-large`, etc. | ❌ No | ❌ No | Whisper uses its own internal detection; Deepgram's `detect_language` flag is ignored, and you can't restrict the candidate language set. |

Important constraints

- `detect_language=true` works only on the pre-recorded `/v1/listen` endpoint; streaming (WebSocket) requests silently ignore the flag, so there is no live detection.
- Detection returns one dominant language per channel for the whole file, not per utterance, so heavily code-switched audio gets a single best guess.
- Whisper models run their own internal detection: Deepgram's flag has no effect, and you can't restrict the candidate language set.

Batch transcription example

```bash
curl --request POST \
  --header "Authorization: Token $DEEPGRAM_API_KEY" \
  --header 'Content-Type: audio/wav' \
  --data-binary @youraudio.wav \
  --url 'https://api.deepgram.com/v1/listen?model=nova-3-general&detect_language=true'
```

Swap `nova-3-general` for any of the supported models in the table above. Deepgram returns a `detected_language` field on each channel of the response (`results.channels[n].detected_language`), so you can branch workflows on the fly.
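If you're calling the endpoint from code instead of curl, here's a minimal Python sketch of the same request plus the branching step. It assumes the `requests` library and a `DEEPGRAM_API_KEY` environment variable; the routing logic at the end is a placeholder for your own.

```python
import os

import requests

DEEPGRAM_URL = "https://api.deepgram.com/v1/listen"


def transcribe_with_detection(audio_path: str, model: str = "nova-3-general") -> dict:
    """Send pre-recorded audio to Deepgram with language detection enabled."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            DEEPGRAM_URL,
            params={"model": model, "detect_language": "true"},
            headers={
                "Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}",
                "Content-Type": "audio/wav",
            },
            data=f,  # stream the file body, same as curl --data-binary
        )
    resp.raise_for_status()
    return resp.json()


result = transcribe_with_detection("youraudio.wav")
channel = result["results"]["channels"][0]
language = channel.get("detected_language")  # e.g. "es"
transcript = channel["alternatives"][0]["transcript"]

# Branch the workflow on the detected language; the routing itself is up to you.
if language == "es":
    print("Spanish pipeline:", transcript[:80])
else:
    print(f"Detected {language!r}; falling back to the default pipeline.")
```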

Production checklist

- Confirm the model you deploy appears in the supported list below; Whisper variants will silently skip detection.
- Keep `model` and `detect_language` in configuration rather than hard-coded, so each project can toggle detection independently.
- Read `detected_language` from every channel in the response and define a fallback branch for languages you don't explicitly handle.
- Re-verify model names against the Management API before launch; Deepgram occasionally retires or renames variants.

Available model names (September 2025)

Use any of the following in the model query parameter:

- Nova-3: `nova-3`, `nova-3-general`, `nova-3-medical`
- Nova-2: `nova-2`, `nova-2-general`, `nova-2-meeting`, `nova-2-phonecall`, plus other domain variants
- Nova (legacy): `nova`, `nova-general`, `nova-phonecall`, `nova-medical`
- Enhanced: `enhanced`, `enhanced-general`, `enhanced-meeting`, `enhanced-phonecall`, plus other domain variants
- Base: `base`, `base-general`, `base-meeting`, `base-phonecall`, plus other domain variants

When in doubt, query the Management API to confirm active names. Deepgram occasionally retires or renames variants.
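For example, a quick lookup sketch in Python. The `/v1/models` endpoint and the response fields used here reflect my reading of the Management API reference at the time of writing; treat both as assumptions and verify against the current docs.

```python
import os

import requests

# List speech-to-text models visible to your account. The endpoint and the
# response shape (an "stt" list with "name"/"canonical_name" fields) are
# assumptions from the Management API docs; verify before relying on them.
resp = requests.get(
    "https://api.deepgram.com/v1/models",
    headers={"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"},
)
resp.raise_for_status()

for model in resp.json().get("stt", []):
    print(model.get("canonical_name"), "-", model.get("name"))
```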

Takeaways

  1. Detection + streaming isn’t a thing. Plan for batch if you need auto-language pivoting.
  2. Stick to Nova/Enhanced/Base for language detection. Whisper is transcription-only in this context.
  3. Parameterize everything. Keep model and detect_language in config so you can toggle per project (see the sketch below).
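As a concrete version of takeaway 3, here's one way to keep those knobs in configuration rather than code; the `DG_*` environment variable names are my own convention, not Deepgram's.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class DeepgramConfig:
    """Transcription knobs sourced from the environment, one set per project."""

    model: str = os.environ.get("DG_MODEL", "nova-3-general")
    detect_language: bool = os.environ.get("DG_DETECT_LANGUAGE", "true") == "true"

    def as_params(self) -> dict:
        """Query parameters for the pre-recorded /v1/listen request."""
        return {
            "model": self.model,
            "detect_language": str(self.detect_language).lower(),
        }


cfg = DeepgramConfig()
print(cfg.as_params())  # e.g. {'model': 'nova-3-general', 'detect_language': 'true'}
```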

Pair this guide with your n8n automations or Render services, and you’ll know exactly which Deepgram knob to turn before launching multi-lingual voice features.