Published August 15, 2024 by Gonzalo

I recently saw a tweet from a fellow YC founder explaining that despite all of the AI coding tools, he still prefers to copy-paste code in and out of Claude’s web client, instead of relying on retrieval algorithms.

The sentiment was that trusting retrieval algorithms to pick the right context often leads to disappointment, while manually selecting and trimming the context yields better results. This is probably because he knows the codebase, and which context is relevant in each particular case, better than anyone else does.
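
To make that workflow concrete, here's a minimal sketch in Python of what "manually selecting and trimming context" amounts to: hand-pick the files you know matter, trim them to a budget, and concatenate everything into one prompt to paste into the chat. The file paths and character budget are illustrative assumptions, not anything from the tweet or from Double.

```python
# Sketch of manual context selection: hand-picked files, trimmed to a budget.
# File paths and CHAR_BUDGET are hypothetical, for illustration only.
from pathlib import Path

RELEVANT_FILES = ["src/auth/session.py", "src/auth/tokens.py"]  # you know best
CHAR_BUDGET = 12_000  # rough stand-in for a context-window limit

def build_prompt(question: str) -> str:
    parts = [question, "\n--- context ---"]
    remaining = CHAR_BUDGET
    for path in RELEVANT_FILES:
        snippet = Path(path).read_text()[:remaining]  # trim, don't trust retrieval
        parts.append(f"\n# {path}\n{snippet}")
        remaining -= len(snippet)
        if remaining <= 0:
            break
    return "\n".join(parts)

print(build_prompt("Why does session refresh fail after token rotation?"))
```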

Someone replied to the tweet noting that selecting context is a form of rubber ducking. I can see how:

  1. Preparing snippets of code for an LLM to ingest requires you to organize and clarify your thoughts

  2. And, as with rubber ducking, you're externalizing your thought process by presenting your code to another entity

Both of which can help you spot issues or think of solutions simply through the process of preparing to explain.

Context Control with Double

Engineering the context inside and across context windows is a huge factor in getting an LLM to produce high-quality code.
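
As a rough illustration of what "visibility into the window" means, here's a sketch that estimates how much of a context window a prompt consumes, using the tiktoken tokenizer. The 128,000-token window size is an assumed example, not a Double-specific number; actual limits depend on the model.

```python
# Estimate context-window usage for a prompt. WINDOW_TOKENS is an assumed
# example; real limits depend on the model you're talking to.
import tiktoken

WINDOW_TOKENS = 128_000  # assumption for illustration

def window_usage(prompt: str) -> float:
    enc = tiktoken.encoding_for_model("gpt-4")
    used = len(enc.encode(prompt))
    return used / WINDOW_TOKENS

prompt = "...your question plus pasted code..."
print(f"{window_usage(prompt):.1%} of the window used")
```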

At Double, our mission is to build the best context management UX of any LLM product, prioritizing visibility. You should never have to wonder which parts of your codebase are being used or how they're being selected. You should also be able to see how much space you've got left in a context window and how context is managed across windows.

In the video below, I’ll walk you through Double’s current context management and how we’re working to make it even better.

If after reading this you have any feedback or questions, please reach me at help@double.bot. I personally read and reply to every email.
