What Humans Can Do in the Age of AI
AI can automate more output, but humans still define direction, care, meaning, and stewardship. A practical framework for building abundance without losing our humanity.
- AI Philosophy
- Human Agency
- Empathy
- Future of Work
- Solarpunk
The AI question is often framed as replacement: what machines will take from us.
I think a better frame is allocation: what we should deliberately give to machines, and what we should deliberately deepen as humans.
If we do this well, AI is not the end of human value. It is a multiplier for human value.
Quick definitions (plain English)
- Abundance: when important goods and services become much cheaper and easier to produce.
- Solarpunk: a future-oriented worldview focused on ecological balance, local resilience, and shared prosperity.
- Distributed thinking: decision-making spread across many people and systems, rather than concentrated in one center.
- Global human organism: society understood as an interdependent system, where one region’s trauma and instability affect everyone.
- Biosphere: the total web of living ecosystems on Earth.
1. Empathy becomes strategic infrastructure
Empathy is not just a moral trait. It is a systems capability.
As automation scales, technical output gets cheaper. What stays scarce is accurate understanding of real human context: grief, trust, fear, aspiration, culture, dignity, and social fragility. Models can approximate these patterns, but humans still carry lived accountability for them.
In practice, this means future builders need to get better at:
- listening to users without reducing them to metrics,
- designing systems that preserve agency under stress,
- noticing second-order harms early,
- and making tradeoffs with compassion, not just optimization.
The more powerful our tools get, the more important empathetic judgment becomes.
2. Traditional arts and crafts are not obsolete; they are anchors
When synthetic output is abundant, handmade work gains a different kind of value.
Traditional arts, crafts, and ways of living are not nostalgia objects. They are memory systems for human meaning. They preserve slowness, ritual, intergenerational teaching, and material intimacy with the world.
In an AI-heavy economy, these practices may become even more important because they protect forms of attention that machines cannot inhabit from the inside.
I expect a strong counter-trend: people using AI for leverage in work, while choosing more tactile, embodied, communal practices in life.
3. Machines should serve abundance on civilizational timescales
Most product roadmaps still orbit quarterly metrics.
But AI gives us a chance to plan at longer horizons: decades and even centuries. If intelligence and coordination costs keep dropping, we can tackle large-scale problems that were previously too expensive to model, simulate, and manage.
That is where solarpunk and distributed thinking become practical, not decorative:
- local-first energy and food resilience,
- climate-adaptive infrastructure,
- public-health feedback loops,
- and governance models that are transparent and participatory.
We already learned hard lessons from social media and blockchain experiments: incentives matter, concentration risk is real, and coordination at scale is messy. The next generation should carry those lessons forward instead of repeating the same centralization loops.
4. Heal the global organism, not just ship features
A lot of human suffering is cyclical: trauma, addiction, chronic illness, poverty traps, violence, displacement.
AI should be judged partly by whether it helps break these cycles.
That requires product and policy systems that improve baseline life conditions:
- better access to mental and physical health support,
- earlier intervention for risk signals,
- lower-friction education and job transitions,
- and higher-quality local institutions.
This is difficult work. But this is also where AI can create real civilizational return, not just productivity theater.
5. Lead by example so tools reflect better values
Tools inherit the intent of their builders.
If we normalize shallow, extractive, short-horizon behavior, we will encode that into the systems layer. If we normalize responsibility, coherence, and service, that also compounds.
So a key human task is cultural: model the kind of person we want automated systems to amplify.
This is leadership by behavior, not branding.
6. Prioritize the biosphere, not just balance sheets
If output becomes cheaper, we have fewer excuses for extractive production models that damage ecosystems.
The next production stack should be evaluated on two axes:
- human flourishing,
- ecological regeneration.
These are not opposing goals in the long run. A destabilized biosphere eventually destabilizes every economic system built on top of it.
So “AI for growth” is incomplete. We need “AI for thriving life systems.”
7. The ambitious horizon: reduce unnecessary suffering
“Heaven on earth” sounds poetic, but it can also be operationalized.
At minimum, it means reducing avoidable suffering and raising baseline dignity for more people:
- less preventable disease,
- less involuntary scarcity,
- less bureaucratic friction for basic needs,
- more time for relationships, learning, and creation.
If automation can produce large surplus, then the central design question is not whether abundance is possible. It is how we build institutions and products that let people actually feel that abundance in everyday life.
Final position
I am optimistic.
AI can help us build a world with less drudgery and more possibility, but only if humans stay responsible for meaning, direction, and stewardship. Machines can scale production. Humans still decide what is worth producing and why.
That is our job now: use machines to create abundance, and use our humanity to choose what abundance is for.
Related notes:
- If AI Is Electricity, Agents Are Engines
- Simulacra, Reality, and Why I Am a Biological Chauvinist
- The Steering Problem: AI, Tech, and Better Human Behavior
Best,
Oli
March 10, 2026