A proposal to prevent job losses from AGI

We may have only a narrow window before big AI labs automate away all economically useful work and centralize wealth. We’re certainly on that path right now.

Some people want to pause all AI development because of this risk of human disempowerment (and the possible extinction risk on top of it). But stopping tech development also means giving up the benefits and abundance that could come along with it.

Is there a middle ground? Can we keep reaping the benefits of AI without rendering humans obsolete?

I think so. Here’s how we can probably achieve it.

The issue we’re dealing with isn’t AI development, but the development of general intelligence. So, governments should start measuring AIs on a scale of generality.

Allow and encourage narrow AI that augments people and enables medical breakthroughs (like AlphaFold), but disallow general AI that scores above a certain threshold.

We already have validated measures of adult cognitive dimensions, so we can use them to test new releases. In fact, a recent paper [1] measures AGI along these dimensions, so governments could adopt the same ones. And because the field is evolving so fast, governments should probably review and iterate on the measure every quarter.
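To make the proposal concrete, here is a minimal sketch of what a deployment gate on generality could look like. The dimension names, the 0–100 scale, the threshold value, and the aggregation rule are all illustrative assumptions on my part, not the ones the cited paper or any regulator actually uses.

```python
# Hypothetical sketch of a generality gate for AI releases.
# Dimensions, scale (0-100), threshold, and aggregation are assumed
# for illustration; they are not from the cited paper or any regulation.

DIMENSIONS = ["reasoning", "memory", "perception", "language", "planning"]

def generality_score(scores: dict[str, float]) -> float:
    """Aggregate per-dimension scores into one generality score.

    Taking the minimum is a deliberately conservative choice: a system
    only counts as general if it is strong on *every* dimension.
    """
    return min(scores[d] for d in DIMENSIONS)

def deployment_allowed(scores: dict[str, float], threshold: float = 60.0) -> bool:
    """A release clears review only if its generality stays below the threshold."""
    return generality_score(scores) < threshold

# A narrow system (think AlphaFold) can score very high on one dimension
# yet stay deployable, because its weakest dimension keeps it below the bar:
narrow = {"reasoning": 40, "memory": 30, "perception": 95,
          "language": 20, "planning": 25}
broad = {d: 80.0 for d in DIMENSIONS}

print(deployment_allowed(narrow))  # narrow AI passes
print(deployment_allowed(broad))   # uniformly strong AI is blocked
```

The point of the min-aggregation is that it matches the spirit of the proposal: it never penalizes spiky, narrow excellence, only across-the-board generality.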

We already have emissions standards for car engines to prevent pollution; why not have standards for AI releases as well?

By the way, my proposal doesn’t limit research into what general intelligence is. Humans are inherently curious, and we shouldn’t limit investigation. I’m merely proposing to prevent deployment of a general intelligence.

As a counter to my proposal, an e/acc-pilled person may argue that innovation is good and that certain problems genuinely require general intelligence. I agree, but on both counts we already have humans to fill the generality gap.

What we want is better tools, not a replacement for ourselves.

[1] A Definition of AGI. https://arxiv.org/abs/2510.18212
