You can select which large language model you’d like to power Chat. At this time, there’s no option to select the Autocomplete model, as we’ve optimized a custom low-latency model specifically for that feature.


OpenAI’s latest iteration of GPT-4 that exceeds GPT-4 Turbo and Claude 3 Opus on coding tasks.

GPT-4 Turbo

OpenAI’s original GPT-4 Turbo.

Claude 3.5 (Sonnet)

Anthropic’s newest and most capable LLM.

Claude 3 (Opus)

Anthropic’s original Opus model.

Llama 3 70B

Meta’s newest and most capable LLM.

DeepSeek Coder V2

The current state-of-the-art open source model for coding.

DBRX Instruct

Databricks’s newest and most capable LLM.

To see a real-time leaderboard of how models rank, we recommend looking at the LMSYS leaderboard sorted by the ‘coding’ category. Double will always set the most capable model as the default when you first install.

Coming Soon (Click here to get notified)


OpenAI’s anticipated successor to GPT-4 and most capable coding LLM, available for early access on Double later this year.

Selecting a Model

To change what model Double uses, go to the VS Code settings (Cmd + , or Ctrl + ,), expand the Extensions dropdown on the left side of the screen, and select Double. Here you’ll find a dropdown with all of the available models.
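If you prefer editing settings as text, the same option can be changed in VS Code’s `settings.json` (open it via the Command Palette with “Preferences: Open User Settings (JSON)”). A minimal sketch is below — the setting key `double.chatModel` and its value are assumptions for illustration; check the dropdown in the Double extension settings for the exact key and the list of accepted values.

```json
{
  // Hypothetical setting key — confirm the exact name and values
  // in the Double section of the VS Code settings UI (Cmd + , or Ctrl + ,).
  "double.chatModel": "Claude 3.5 (Sonnet)"
}
```

VS Code’s `settings.json` accepts comments (it is parsed as JSONC), so annotations like the one above are safe to keep.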