Day 0 Support: We are committed to making new, state-of-the-art models available to all users the same day they are released.

You can select which large language model you’d like to power Chat. At this time, there’s no option to select the Autocomplete model, as we’ve optimized a custom model specifically for low latency for that feature.

OpenAI o1-mini

OpenAI’s highest-performing model for coding and math tasks. Despite the name, it is both faster and stronger than o1-preview on these tasks!

OpenAI o1-preview

OpenAI’s newest reasoning model designed to solve problems across generalist domains.

gpt-4o-2024-09-03

OpenAI’s newest GPT-4o checkpoint.

gpt-4o-2024-08-06

OpenAI’s 2024-08-06 checkpoint for GPT-4o.

GPT-4 Turbo

OpenAI’s original GPT-4 Turbo.

DeepSeek V2.5

The current state-of-the-art open-source model for coding.

Llama 3.1 405B

The first frontier-level open-source AI model, with a 128k context length.

Llama 3.1 70B

The successor to Llama 3 70B.

Llama 3.1 8B

A small but powerful Llama model.

gpt-4o-2024-05-13

OpenAI’s canonical checkpoint for GPT-4o.

Claude 3.5 Sonnet (2024-10-22)

Anthropic’s newest and most capable LLM.

Claude 3 (Opus)

Anthropic’s original Opus model.

Mistral Large 2

Mistral’s newest and most capable LLM, released on July 24, 2024.

To see a real-time leaderboard of how models rank, we recommend looking at the LMSYS leaderboard sorted by the ‘coding’ category. Double will always set the most capable model as the default when you first install.

Coming Soon (Click here to get notified)

GPT-5

OpenAI’s anticipated successor to GPT-4, expected to be its most capable coding LLM; it will be available for early access on Double later this year.

Selecting a Model

To change what model Double uses, go to the VS Code settings (Cmd + , or Ctrl + ,), expand the Extensions dropdown on the left side of the screen, and select Double. Here you’ll find a dropdown with all of the available models.
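
If you prefer editing settings directly, the same choice can be made in your settings.json. The snippet below is a minimal sketch: the setting key "double.chatModel" and the value shown are illustrative only, so check the Double section of the Settings UI for the exact key name and the accepted model values.

// Open settings.json via the Command Palette: "Preferences: Open User Settings (JSON)"
{
  // Hypothetical key shown for illustration; use the key listed under the Double extension settings.
  "double.chatModel": "Claude 3.5 Sonnet (2024-10-22)"
}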