LLMs are the primary daily tool in the stack
LLMs are the primary daily tool. Not philosophy: frequency of use across every work surface, every day.
The belief
LLMs top the daily-use stack by frequency. Not as an extension of self: as the tool you open first. Code, writing, data analysis, learning, structured thinking - LLMs handle the first pass on all of it. That is a factual usage observation, not a philosophical claim.
The specific substrate is interchangeable. When the primary goes down, you pull up a local fallback. Continuous LLM access is non-negotiable; the vendor is not.
How to apply
- Default to LLM-first on any first-pass task. Code review, draft writing, data pattern identification, concept explanation - start with the LLM before reaching for a specialist tool or doing it manually. Specialist tools enter when the LLM output has a specific gap, not before.
- Configure a multi-substrate stack, not a single-vendor dependency. Maintain at least one cloud primary and one local fallback (Ollama or equivalent). When primary access drops, the fallback is already warm. Which provider delivers is a logistics question.
- Build per-domain wrappers for recurring workflows. A general-purpose session is high-friction for specialized work. Create persistent context per domain: a data analytics helper, a writing helper, a PRD summarizer. Structure each around the domain's curriculum or spec. "This has replaced my podcast addiction with a more interactive version."
- Treat the LLM as build partner, not search engine. When building, the LLM drives code generation, not just lookup. "AI building AI is now." The shift from lookup-tool to build-partner changes how much context quality matters - which is why the second-brain exists.
- Audit daily-use patterns to find where the LLM is absent. Any workflow where the answer to "did the LLM touch this?" is "no, I did it manually" is a candidate for a wrapper. The goal is not LLM for everything; it is to have a deliberate reason when you do not use it.
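The multi-substrate bullet above reduces to a small routing rule: try providers in order, fall through on failure. A minimal sketch of that rule; the provider names and callables here are hypothetical stand-ins for illustration, not the author's actual setup:

```python
from typing import Callable

class AllProvidersDown(RuntimeError):
    """Raised when every substrate in the chain fails."""

def ask(prompt: str, providers: list[tuple[str, Callable[[str], str]]]) -> tuple[str, str]:
    """Try each (name, call) provider in order; return (provider_name, reply)."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # outage, rate limit, dropped connection...
            errors.append(f"{name}: {exc}")
    raise AllProvidersDown("; ".join(errors))

# Hypothetical stand-ins: a cloud primary that is down, a warm local fallback.
def cloud_primary(prompt: str) -> str:
    raise ConnectionError("provider outage")

def local_fallback(prompt: str) -> str:
    return f"[local] {prompt}"

name, reply = ask("summarize this PRD", [("cloud", cloud_primary), ("local", local_fallback)])
print(name, reply)  # the cloud primary fails, so the call lands on the local substrate
```

In a real stack the callables would hit an API client and a local runtime respectively; the point is that the chain, not any single vendor, is the dependency.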
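The per-domain wrappers described above amount to binding a persistent spec to every session in that domain. A minimal sketch, assuming a chat-style message format; the domain specs here are invented placeholders, not the author's actual wrapper contents:

```python
class DomainWrapper:
    """Bind a persistent domain spec so every session starts with the same context."""

    def __init__(self, name: str, spec: str):
        self.name = name
        self.spec = spec

    def messages(self, user_prompt: str) -> list[dict]:
        # Prepend the domain's curriculum/spec as the system message every time,
        # so a new session is never a blank general-purpose one.
        return [
            {"role": "system", "content": self.spec},
            {"role": "user", "content": user_prompt},
        ]

# Hypothetical domain specs, structured the way the author describes:
# look up a program's curriculum, use it as the wrapper's backbone.
analytics = DomainWrapper("data-analytics", "You are a data-analytics helper. Prefer Pandas and SQL.")
prd = DomainWrapper("prd-summarizer", "You summarize PRDs into goals, scope, and open questions.")

msgs = analytics.messages("profile this CSV for outliers")
```

Each wrapper is cheap to create, which is what makes the audit bullet actionable: a manual workflow becomes a one-class fix.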
What this is not
- Not a claim that LLMs are always right. The belief is about daily-use frequency and first-pass default, not reliability or accuracy. Verification and judgment still sit with the practitioner. The LLM is the fast draft; the human is the editor.
- Not an argument for LLM-only workflows. Specialist tools - version control, structured databases, design software - stay where they are purpose-fit. LLM-first means the LLM enters before you reach for the next tool, not that the next tool disappears.
- Not mysticism about AI as an extension of self. This belief makes a frequency claim. It does not argue that LLMs think like you or that you think with them in some merged sense. That framing is imprecise and not load-bearing here.
Argues against
- "LLMs are a useful supplement, but core work should be done without them to maintain independent skill."
- "Vendor dependency risk means you should keep LLM use minimal and deliberate, not daily-primary."
- "Heavy LLM use signals low skill floor - practitioners who know their craft do not need the first-pass assist."
Where to go from here
If you want the positional form of this belief - why AI fluency is now a non-negotiable for PMs, not an optional skill - go to AI PM skillset table stakes. That belief sets the requirement; this one describes what meeting it actually looks like day to day.
If you want to understand why daily LLM use at this volume requires a persistent context layer, go to second brain is context layer. The second-brain exists because LLM-as-primary-daily-tool created a context management problem worth solving.
If you want the parent theme with the full evidence arc and the AI PM skill inventory, go to the AI PM skillset theme.
Evidence (8 dated rows)
| Date | Entry | Post |
|---|---|---|
| 2023-03-01 | ChatGPT asked to write its own testimonial about working with the author, then published as the post body. Performative first signal that the tool had entered the stack as a creative collaborator. | view post → (urn:li:activity:7036000000000000000) |
| 2023-03-14 | Six concrete daily-use surfaces named: data analysis with Jupyter, Pandas, and SQL; PRD refinement; brainstorming; Linux home-lab setup; home network security; note-structuring. "My notes have never been so actionable thanks to ChatGPT's help." Seed of the second-brain. | view post → (urn:li:activity:7041000000000000000) |
| 2023-05-10 | "my LLM friend" - casual register, no scare-quotes. Framing stabilized within two months of the March anchor. | view post → (urn:li:activity:7062000000000000000) |
| 2024-03-31 | Full stack listed in Collab Article: open-source LLMs deployed locally, ChatGPT Pro, GitHub Copilot, Bing AI with integrated browsing, personal custom GPT wrappers for data analytics, writing, PRD summarization. | view post → (urn:li:activity:7180000000000000000) |
| 2024-05-31 | "I have created a personal GPT wrapper for each subject that I am after. I tend to look up top programs, look at their curriculum and add that as a structure to my wrappers. This has replaced my podcast addiction with a more interactive version." | view post → (urn:li:activity:7202000000000000000) |
| 2024-06-05 | "ChatGPT was down yesterday... this prompted me to be prepared for the times when I have to be off-the-grid but with my trusty LLM peer." Local fallback bound to keyboard shortcut. 44 reactions. | view post → (urn:li:activity:7204000000000000000) |
| 2024-07-19 | "It's AI that is helping me code. AI building AI is now." Recursion: LLM extends the build, the build extends the LLM's role. | view post → (urn:li:activity:7220000000000000000) |
| 2026-04-21 | "I have been using my second brain for over 4 months now. Funnily this has become my preferred way to use claude." The persistent daily-use artifact is live. | view post → (urn:li:activity:7318000000000000000) |