All posts by Paras Chopra
How does behavior change happen
This essay is part of a series on my learnings and insights from building a habit coaching app (Nintee) in 2024. It ultimately didn’t work out because an app has marginal influence on a human’s life (versus that of friends, family, culture, and the immediate environment). Most apps that work in this category operate like gyms: charge upfront when motivation is high, and accept high churn. I had raised VC funding for it, and when it later became clear that this wouldn’t be a VC-scale business, I shut it down and returned the remaining funding. I hope the insights learned along the way prove valuable to others. ... Read the entire post →
Learning flywheels are all you need
One intuition pump for the future of AI is to see what happened with human intelligence in our evolutionary past.
Our ancestors 100k years ago had the same cognitive capacity as us (evolution works slowly), and yet all modern technology and knowledge have emerged only in the last 1000 years or so.
Why such a sudden jump?
It’s not because our individual intelligence improved, but because we assembled learning flywheels over time (writing, books, schools, colleges, the scientific method), and those let each generation compound on the previous one, leading to the cultural explosion we’re going through. ... Read the entire post →
The two views of rationality
This essay is part of a series on my learnings and insights from building a habit coaching app (Nintee) in 2024. It ultimately didn’t work out because an app has marginal influence on a human’s life (versus that of friends, family, culture, and the immediate environment). Most apps that work in this category operate like gyms: charge upfront when motivation is high, and accept high churn. I had raised VC funding for it, and when it later became clear that this wouldn’t be a VC-scale business, I shut it down and returned the remaining funding. I hope the insights learned along the way prove valuable to others. ... Read the entire post →
What does your AI dream of?
I recently built an open source Chrome extension that scrapes titles of your conversations with Claude and ChatGPT and then asks an AI to imagine a visual that best captures your current state of mind.
So imagine you’ve been talking about statistics and correlation; when you open a new tab, you see this:

The project is open source, so you can try it yourself: https://github.com/paraschopra/murmuration
You’ll need an OpenRouter key; note that generating 3 visuals/day with Sonnet 4.6 may cost ~$5/mo. ... Read the entire post →
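The core loop described above (scrape conversation titles, then ask a model via OpenRouter to imagine a visual) can be sketched roughly as follows. This is my own illustrative sketch, not the extension’s actual code: the model id and prompt wording are assumptions, and only the OpenRouter chat-completions endpoint and header shape are taken from its public API.

```typescript
const OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions";

// Pure helper: turn scraped conversation titles into a single prompt.
// The wording here is a guess at the kind of prompt the extension sends.
export function buildPrompt(titles: string[]): string {
  return [
    "Here are the titles of my recent AI conversations:",
    ...titles.map((t) => `- ${t}`),
    "Imagine a single visual that best captures this state of mind.",
  ].join("\n");
}

// Ask an OpenRouter-hosted model to describe the visual.
// The model id below is a placeholder assumption.
export async function imagineVisual(
  apiKey: string,
  titles: string[],
  model = "anthropic/claude-sonnet-4.5"
): Promise<string> {
  const res = await fetch(OPENROUTER_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: buildPrompt(titles) }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The prompt-building step is kept pure so it can be tested without network access; the actual extension presumably also handles rendering the visual into the new-tab page.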
Making a product that Marl loves
This essay is part of a series on my learnings and insights from building a habit coaching app (Nintee) in 2024. It ultimately didn’t work out because an app has marginal influence on a human’s life (versus that of friends, family, culture, and the immediate environment). Most apps that work in this category operate like gyms: charge upfront when motivation is high, and accept high churn. I had raised VC funding for it, and when it later became clear that this wouldn’t be a VC-scale business, I shut it down and returned the remaining funding. I hope the insights learned along the way prove valuable to others. ... Read the entire post →
My Claude Code workflow
Half my time goes into using Claude Code, and the other half goes into optimizing my workflow for it.
What I have now:
• Everything organized by sprint in ./sprints/ folders (v1, v2, etc.)
• Custom command /prd to help me brainstorm requirements for a sprint and break them down into atomic tasks (which should take 5-10 mins each)
• Custom command /dev to pick the highest-priority task in a PRD and follow test-driven development to implement it
• Custom command /walkthrough to write a sprint review report that details what code was produced so I can read and understand exactly what the code does ... Read the entire post →
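Claude Code custom slash commands like the ones above are markdown prompt files placed under .claude/commands/. As a rough sketch (the filename convention is real, but the wording below is my guess at what such a command might say, not my actual /dev command):

```markdown
<!-- .claude/commands/dev.md (hypothetical sketch) -->
Read the current sprint's PRD in ./sprints/, pick the highest-priority
unfinished task, and implement it using test-driven development:

1. Write a failing test that captures the task's acceptance criteria.
2. Implement the minimal code needed to make the test pass.
3. Refactor, re-run the full test suite, and mark the task done in the PRD.
```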
Words are minimum-viable coordination tools
Words have a bewitching tendency: we assume they point to deep essences. But, game-theoretically speaking, words exist to get a job done, so they operate at whatever level of coarse-graining is sufficient to get the speaker’s job done.
Evolution doesn’t like to waste energy. Hence all communication between people is a coordination tool where all parties are interested in getting their job done, but don’t want to invest more energy than is necessary to do so.
So if someone uses the word “God” or “Love”, the job is done if it elicits the emotions, actions, and associations roughly matching what the speaker intended, so our search for what those words “truly” mean is just misguided. Meaning is in what the exchange does in a particular context. By themselves, words are empty. ... Read the entire post →
Nobody cares about your idea.
The most important question you should be asking while developing a product / startup is this:
why would anyone change their behaviour to accommodate your product?
And the only correct answer to this question is that they’re *already* exhibiting the behaviour and your product will simply help them be >2x more efficient on a dimension they care about.
Behaviour change is tough. Internalize that nobody changes their behaviour for marginal or incremental benefits.
And certainly nobody cares about your idea. ... Read the entire post →
A proposal to prevent job losses from AGI
We may have only a narrow window before big AI labs automate away all economically useful work and centralize wealth. We’re certainly on that path right now.
Some people want to pause all AI development because of this risk of human disempowerment (plus a possible extinction risk). But stopping tech development also means giving up the benefits and abundance that could come with it.
Is there a middle ground? Can we keep reaping the benefits of AI without rendering humans obsolete?
I think so. Here’s how we can probably achieve it. ... Read the entire post →