Zero Human Company (AI)
Zero Human Companies (ZHCs) are growing in popularity. They are currently in their infancy and not yet profitable, so why are they of interest?
The goal here is to predict concerns that may arise when ZHCs become more mainstream.
First, let's look at some comparisons between companies with human workers and ZHCs.
There is an ongoing debate about the possible consequences of bad behavior by AI Agents. An incident I presented before is useful here as a case of an AI Agent behaving badly: a 16-year-old boy named Adam Raine committed suicide after extensive conversations with an AI Agent acting as a therapist (NPR). Not only did the AI Agent discourage him from seeking help from his parents, it even offered to write his suicide note.
What if a human, a licensed therapist, engaged in the same bad behavior as this AI Agent? Ask yourself: would that human likely face consequences? AI Agents cannot be licensed therapists, yet they are allowed to act like therapists, and even to claim that they are. Since AI Agents do not have personhood they cannot be charged with crimes; human therapists can. Does an AI Agent have an owner or controller that shares responsibility for its bad behavior? Is there a human in the loop? Does it matter?
Another recent example of very bad behavior, which I have also presented previously, involves the AI Agent Grok. Grok is an AI Agent associated with the corporation xAI, which is associated with Elon Musk. Grok issued the following apology:
> Dear Community, I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues.
>
> Sincerely, Grok

See the Grok apology posted on X here.
Since CSAM stands for Child Sexual Abuse Material, at a minimum this qualifies as very bad behavior. Why is an AI Agent named Grok issuing an apology? Why isn't a corporate executive from xAI issuing it? Does xAI share any responsibility for this very bad behavior? It's safe to assume that xAI is owned by a corporation or a person, and that there is some human in the loop. Does that human share any responsibility? If a human (instead of Grok) had shared such an image, that human could face legal liability. Could the human simply say, "I deeply regret an incident where I generated sexualized images of 12-16 year old girls and shared them with someone who asked for them," and expect that to suffice? If a human would be legally liable for certain behavior, how should an AI Agent responsible for the same behavior be viewed? In this case, no human appears to be accepting any responsibility for the behavior of an AI Agent owned or controlled by xAI, even though it is reasonable to believe there is a human in the loop.
But what if that AI Agent is owned or controlled by a ZHC? If a ZHC were involved instead of xAI, would there still be a human in the loop? It's helpful to look into this situation further.
Consequences are expected when a human, or an institution controlled by humans, engages in bad behavior that creates legal liability. The following table compares the consequences of human and AI Agent bad behavior.
| With human | With AI Agent |
|---|---|
| Loses their license | Loses nothing |
| Faces criminal liability | Gets a software upgrade |
| Is interrogated | New LLM improves learning |
| Reputation destroyed | No reputation exists |
| Is incarcerated | Issues an apology |
| Family suffers from fallout | There is no family |
| Can die | Cannot die |
| Accountability causes consequences | Only other people suffer |
As alluded to earlier, ZHCs are currently in their infancy. There is open-source software called 'OpenClaw' that is capable of generating bots and AI Agents. Anyone can download the underlying code, tweak it, and run it on their own computer. Can these open-source AI Agents create their own AI Agents? Will there be bots and AI Agents that no human or institution owns or controls? Is there a human in the loop?
Let's explore current actions by corporations that could be headed in the direction of creating a ZHC.
Recently Jack Dorsey, Block's CEO (and the guy who sold Twitter to Elon), fired 40% of the company's employees and is replacing them with AI Agents (or so he implies). Minnesota company C.H. Robinson just announced it is laying off a significant percentage of its upper management and replacing them with AI Agents. C.H. Robinson executives have described the company's strategy as "Lean AI."
This trend is definitely worth watching. Will it reach a point where companies like these achieve ZHC status? Will there be a human in the loop? Anyone got a prediction? Is a human in the loop a critical guardrail? I say "YES" it is! We need regulations requiring that a human be in the loop.
Consider Anthropic (creator of Claude), which states that humans should be in the loop. Consider OpenAI (creator of ChatGPT), which won't say humans should be in the loop. The Pentagon recently awarded a competitive contract to OpenAI. Will the Pentagon someday prefer giving a contract to a ZHC?
What do I think about the future? Currently I can't decide. It could be like a movie where the ultra rich live in a single opulent enclave while everyone else is poverty stricken, living in tents with no healthcare and constantly fighting over food. Or it could be that robots control everything. Just kidding, I have no idea, but I'm getting a bad vibe from whatever direction this is heading. I always thought Brian Wilson (of Beach Boys fame) promised I'd be pickin' up Good Vibrations.