You can select which large language model powers Chat. At this time there’s no option to select the Autocomplete model, as we’ve optimized a custom model specifically for low-latency completions.

Mistral Large 2

Mistral’s newest and most capable LLM, released in July 2024.

DeepSeek Coder V2 (July 2024)

The current state-of-the-art open-source model for coding.

Llama 3.1 405B

The first frontier-level open-source AI model, with a 128k-token context length.

Llama 3.1 70B

The successor to Llama 3 70B.

Llama 3.1 8B

A small but powerful Llama model.

GPT-4o

OpenAI’s latest iteration of GPT-4 that exceeds GPT-4 Turbo and Claude 3 Opus on coding tasks.

GPT-4 Turbo

OpenAI’s original GPT-4 Turbo.

Claude 3.5 (Sonnet)

Anthropic’s newest and most capable LLM.

Claude 3 (Opus)

Anthropic’s original Opus model.

Llama 3 70B

Meta’s previous-generation flagship LLM, now superseded by Llama 3.1.

DBRX Instruct

Databricks’s newest and most capable LLM.

To see a real-time leaderboard of how models rank, we recommend looking at the LMSYS leaderboard sorted by the ‘coding’ category. Double will always set the most capable model as the default when you first install.

Coming Soon

GPT-5

OpenAI’s anticipated successor to GPT-4, expected to be its most capable coding LLM, available for early access on Double later this year.

Selecting a Model

To change which model Double uses, open the VS Code settings (Cmd + , or Ctrl + ,), expand the Extensions section on the left side of the screen, and select Double. Here you’ll find a dropdown listing all of the available models.
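If you prefer editing your user settings file directly, the dropdown above corresponds to an extension setting in settings.json. The exact setting key below is an assumption for illustration only; check the Double extension’s contributed settings in VS Code for the real key and the accepted values:

```json
{
  // Hypothetical key — verify the actual setting name under
  // Extensions > Double in the VS Code settings UI.
  "double.chatModel": "GPT-4o"
}
```

VS Code applies changes to settings.json immediately, so no restart should be needed after switching models.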