How to Leverage LLMs to Document What You Learn
In today’s fast-paced software development landscape, innovative solutions and best practices often remain buried in scattered notes, hasty commits, and ad-hoc troubleshooting sessions.
Like many developers, I’ve struggled to capture the full breadth of my problem-solving process—from initial brainstorming to final solution. But I’ve discovered something transformative: by leveraging Large Language Models (LLMs) throughout development, I can not only build robust systems but also turn my raw ideas into clear, comprehensive documentation.
This documentation becomes a valuable learning resource, accelerating learning for you, your team, your organization, or the wider community.
This blog post explores the process I’ve developed over recent months. Beyond just being a guide for using LLMs to write better code, it’s a call to action for you to develop your own process, document your insights, and share them with others. This creates a living repository of knowledge that can be shared, refined, and built upon continuously.
Overview of the LLM-Powered Workflow
The process is built on five core phases:
- Researching: Gathering and synthesizing data.
- Deciding: Evaluating alternatives and planning your approach.
- Building: Writing code with the assistance of AI tools.
- Iterating: Testing, debugging, and refining your solution.
- Documenting: Compiling all insights and your decisions into a clear, structured document.
To be clear, none of these steps is unique. People like Harper have been doing most of them for a while now.
My key contribution here is encouraging people to complete the process with a documentation step that crystallizes their learnings into accessible handbooks that benefit everyone.
This workflow’s iterative nature transforms your documentation into a living document—each aspect of your problem-solving feeds back into the cycle, continuously enriching the knowledge base.
Below is a high-level diagram of this continuous process:
```mermaid
flowchart TD
    A[Researching] --> B[Deciding]
    B --> C[Building]
    C --> D[Iterating]
    D --> E[Documenting]
    E --> F[Shared Learning Resource]
    F --> A
```
Diagram: An iterative cycle where each phase reinforces and informs the next, culminating in a resource that benefits your entire community.
Researching with LLMs
The journey begins with research. At the start of every problem I’m solving (which could be a new feature or enhancement to a project), I capture all my initial thoughts and ideas—even if they seem vague or unstructured. Using an LLM as a research assistant allows me to ask targeted questions and receive concise, synthesized answers. Instead of manually scouring countless web pages, you can simply ask:
“What are the key differences between OAuth 2.0 and OpenID Connect for securing APIs? List pros, cons, and typical use cases.”
Best Practices for Research
- Be Specific: Focus your queries to get precise information.
- Iterate with Follow-Up Questions: Drill down to clarify and expand on initial responses.
- Verify Critical Information: Use the LLM’s output as a starting point and verify details against official documentation.
- Summarize Findings: Once you’ve gathered enough insights, ask the LLM to summarize your research into a coherent document (an example prompt follows this list). This summary becomes the backbone for later phases.
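For example, a wrap-up prompt for the research above might look something like this (illustrative, not a magic formula):

“Summarize our discussion of OAuth 2.0 versus OpenID Connect into a short markdown document with sections for background, trade-offs, and open questions, so I can use it as the starting point for a design decision.”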
Deciding: Planning and Designing Your Solution
With your research in hand, the next step is to make informed decisions. Use the LLM as a tool in your toolbox to weigh options, evaluate trade-offs, and draft a high-level implementation plan. For example, if you’re deciding between WebSockets and Server-Sent Events for real-time updates, prompt the LLM to compare the options against your requirements.
“Compare WebSockets and Server-Sent Events for a high-traffic chat application in terms of latency, scalability, and implementation complexity.”
Best Practices for Deciding
- Provide Detailed Context: Outline your project requirements and constraints.
- Request Structured Outputs: Ask for bullet lists or tables to compare options clearly.
- Explore Alternatives: Don’t settle on the first answer—ask for additional approaches.
- Draft a Blueprint: Generate a high-level plan that will guide your coding efforts.
The output from this phase becomes your design blueprint—a document that informs all subsequent work.
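To make this concrete, a blueprint for the chat example above (purely illustrative; adapt it to your own constraints) might boil down to a handful of bullet points:

- Transport: Server-Sent Events for server-to-client updates, plain HTTP POST for sending messages.
- Fallback: long polling for clients or proxies that can’t hold an SSE connection open.
- Scalability: fan-out through a pub/sub layer (for example Redis) so multiple app servers can broadcast.
- Open questions: message ordering guarantees, reconnection and replay strategy.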
Building with AI-Powered Coding Assistants
This is where the magic happens. Modern AI tools have revolutionized coding. While GitHub Copilot integrated into VS Code is fantastic, the ecosystem now includes the AI-first editor Cursor, the Cline extension, innovative site builders like Vercel’s V0, and terminal-based agents like Claude Code. There’s even tooling like Aider that can combine multiple models for a richer coding experience.
This post was originally written in late Q1 2025. So depending on when you end up reading this, there will probably be 10 new products competing with each of the ones listed above and probably a bunch more tooling I can’t even conceptualize right now.
Editor’s Note: It’s mid-April 2025 and more options have already emerged: Continue 1.0, Abacus.AI, and new OpenAI models that are better at coding tasks.
How to Leverage AI in Coding
- Break Down Tasks: Instead of asking for an entire application, request small, manageable code snippets. Keep your projects small, and compose them of stand-alone modules you can work on in isolation.
- Provide Context: Supply relevant code or project details so the LLM can generate accurate output.
- Iterate and Refine: Use AI-generated code as a draft. Test it, review it, and then ask follow-up questions.
- Explore Specialized Tools: Experiment with different platforms to find the ones that best fit your workflow.
- Have Robust Rules: Make your linters strict, and if your language has optional type checking (as Python does), use it. Prefer TypeScript over JavaScript.
- Have Comprehensive Tests: Testing is more important than ever, so cover all eventualities. Luckily, LLMs are genuinely good at writing tests. You still have to watch them to keep them from cheating the assertions, but because tests are mostly repetitive, the models handle them well (see the sketch just after this list).
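As a sketch of what that looks like in practice, here is the kind of small, fully typed, parametrized test an LLM can draft in seconds. The `slugify` helper and its cases are hypothetical, shown only to illustrate the shape:

```python
# Illustrative only: a tiny typed helper plus a parametrized pytest suite.
import re

import pytest


def slugify(raw: str) -> str:
    """Lower-case a string and collapse non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", raw.lower()).strip("-")


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello, World!", "hello-world"),                 # punctuation stripped
        ("  spaces   everywhere ", "spaces-everywhere"),  # whitespace collapsed
        ("already-a-slug", "already-a-slug"),             # no-op on clean input
        ("", ""),                                         # empty input stays empty
    ],
)
def test_slugify(raw: str, expected: str) -> None:
    assert slugify(raw) == expected
```

Reviewing a table of cases like this is far faster than writing it by hand, and it’s exactly where you check that the model is testing real behavior rather than gaming the assertions.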
Iterating: Testing, Debugging, and Refining Your Solution
No code works perfectly on the first try. Iteration is the heart of effective development. After building your solution, use LLMs to help debug and optimize. When you encounter errors or performance issues, prompt the LLM with the problem details and relevant code snippets.
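A debugging prompt can be as simple as pasting the failure next to the relevant code. Something along these lines (the specifics are illustrative) usually works:

“This function is supposed to deduplicate records by email, but the attached test fails with the traceback below. Explain why it fails and suggest a minimal fix.”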
The Iteration Loop
```mermaid
flowchart TD
    A[Write Code] --> B[Test Code]
    B --> C{Do tests pass?}
    C -- YES --> D[Deploy/Document]
    C -- NO --> E[Consult LLM for Debugging]
    E --> A
```
Diagram: The cycle of writing, testing, and debugging code with AI guidance.
Best Practices for Iteration
- Isolate Issues: Tackle one error, function, or bottleneck at a time.
- Provide Context: Include relevant snippets and error logs in your prompts.
- Ask for Explanations: Request not just fixes but also reasoning behind suggestions.
- Retest After Changes: Verify that each fix resolves the issue without introducing new problems.
This loop of writing, testing, and refining ensures that your final solution is robust and efficient.
Documenting: Creating a Comprehensive Learning Resource
This is where everything crystallizes and helps you move forward.
The final phase is to compile everything—research, design decisions, code, and debugging insights—into a polished, comprehensive document. This isn’t just documentation; it’s a narrative of your entire problem-solving journey, a resource that others can learn from and build upon.
It is my personal belief that any documentation is better than no documentation, but really good documentation goes beyond explaining how a system works.
Really good documentation starts by explaining the problem being solved. Ideally, it also covers which options were considered, why the winning approach was chosen, and why the others were rejected.
Excellent documentation walks you through the entire process, ending with the resulting solution and how it works. Bonus points if you point readers to similar projects, deeper resources on the underlying concepts, and other material along those lines.
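Put together, a skeleton for that kind of document (adapt it to your own context) might look like:

- Problem: what you were trying to solve and why it mattered.
- Options considered: the alternatives you evaluated and their trade-offs.
- Decision: the approach you chose and why the others were rejected.
- Solution: how the final implementation works.
- Further reading: similar projects and deeper resources on the underlying concepts.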
The Documentation Process
```mermaid
flowchart TD
    A[Draft Documentation] --> B[LLM Review & Suggestions]
    B --> C[Developer Edits & Refinement]
    C --> D[Final, Polished Document]
```
Diagram: An iterative process where AI-generated drafts are refined by human oversight to produce the final documentation.
Best Practices for Documentation
- Generate Incrementally: Document each phase as you complete it.
- Use AI to Summarize: Let the LLM transform your raw notes into readable, structured text.
- Review and Edit Thoroughly: Ensure technical accuracy and clarity.
- Share Widely: Publish your document on your blog, internal wiki, or community forum, and invite feedback.
This final document becomes a case study—a rich resource that captures your reasoning, the trade-offs you considered, and the final solution. It accelerates learning for anyone who reads it, turning your journey into an asset for the entire community.
If you have access to models with “Deep Research,” you can also drop in your final blog post and have the LLM find associated resources, blog posts, interesting related topics—then update your post to include pointers to those places.
Learning from Harper’s LLM Codegen Workflow
I wasn’t the only one experimenting with these methods. My friend Harper has been building small products using LLMs and has shared his process in a detailed blog post, “My LLM Codegen Workflow (ATM)”. As he puts it:
“I have been building so many small products using LLMs. It has been fun, and useful. However, there are pitfalls that can waste so much time. A while back a friend asked me how I was using LLMs to write software. I thought ‘oh boy. how much time do you have!’ and thus this post.”
Harper’s workflow echoes the iterative, evolving nature of the process described here. He notes,
“This is working well NOW, it will probably not work in 2 weeks, or it will work twice as well. ¯\_(ツ)_/¯”
These quotes remind us that this process is dynamic—it evolves as the tools improve and as we learn more. I encourage you to read his post for further inspiration and to see how others are applying these techniques.
In Summary
The true power of this process lies in transforming a messy, unstructured journey into a clear, structured resource that accelerates learning. By using LLMs to research, decide, build, iterate, and document, you create a comprehensive narrative that helps you understand your solutions better while serving as an invaluable guide for others.
I challenge you to adopt this LLM-powered workflow in your own projects:
- Experiment: Integrate LLMs into every phase of your development process.
- Document: Turn your raw outputs into a polished blog post or technical document.
- Share: Publish your work, share your insights, and invite feedback.
- Iterate: Continuously improve your process and document your improvements.
This doesn’t just boost your output—it builds a library others can learn from.
When you document your development journey well, you learn faster and help others move faster too. Use the tools, share the path, and let the community grow stronger from it.