Humanity, Technology, and the Question We Keep Avoiding
- Victoria Tchervenski

- Jan 11
- 4 min read
The biggest takeaway from this event wasn’t that AI is powerful (we already know that). It was that AI needs guardrails, and not just technical ones. Social, ethical, and human guardrails.
Right now, AI can absolutely be used for harm. But it can’t choose to do harm on its own, not yet at least, and according to our panelists, we have at least a few more years before that becomes a concern. The danger comes from how people deploy it, scale it, and remove friction in the name of efficiency. As one panelist put it bluntly:
“AI will not replace us, but management will.”
Another line that stuck with me was:
“AI is best used as augmentation, not replacement.”
Augmentation keeps humans central. Replacement treats humans as a temporary inconvenience.

One thing that came up repeatedly was the reality that most AI startups fail: something like 95%, according to a study from MIT’s NANDA research group. Not because the technology doesn’t work, but because it’s deployed without a clear understanding of human needs, incentives, or long-term trust. AI moves fast, but humans, and specifically human consumers, are the ones who decide what success actually looks like.
Eden made a point that really stuck with me: that humanity’s place in an AI-driven world may ultimately need to be determined by governments, regulations, and policy. I agree with that, but I don’t think it’s the whole story.
I think humans’ place will also be determined by market demand. We’re already seeing this with terms like “AI slop” becoming popular to describe low-quality, mass-produced AI content. It’s the same reason local art markets exist, or why people value handmade gifts. You can buy a mug or a painting cheaper online, but many people are willing to pay more for something human-made. That human element has value.
Art is the clearest example right now, but it doesn’t stop there. As people realize AI isn’t perfect, I think being human-run will become a selling point across industries. The backlash against Duolingo’s “AI-first” messaging is a good example. Most everyday people don’t want AI-first companies. They want people-first companies that use AI thoughtfully. AI-first feels efficient. People-first feels right.
And companies don’t usually think in feelings; they think in numbers. Right now, the numbers say AI is faster, cheaper, and more scalable. But numbers don’t capture trust, authenticity, or long-term brand damage when people feel replaced instead of supported.

I had a conversation with one of our panelists about their company’s work on AI managers for AI agents. Their explanation made sense: AI moves so fast that humans become the bottleneck. But when I asked how an AI manager would know when to escalate decisions to a human, the answer wasn’t entirely clear. The implicit goal seemed to be removing humans altogether.
That led me to a question I haven’t been able to shake: If a human starts an AI-run company, could AI eventually run the entire thing on its own? And, if so, do you actually need creativity to run a successful business? And can AI be creative in a way that means something?
For me, this ties back to writing, art, and communication. If you want people to read something, you probably shouldn’t have AI write it. I don’t enjoy reading AI-generated writing. If I can tell it’s AI, I can feel the inauthenticity. AI’s goal is to produce the most statistically predictable output. Humans, on the other hand, are interesting because we can be unpredictable.
People value that unpredictability. They value intention. They value knowing there’s a person on the other side who cared enough to think before they wrote or painted or created. Someone behind the details and imperfections and nuance.
People also value security. Not just data security or job security, but psychological security. Knowing there’s a human somewhere in the loop who can be held accountable, who understands context, who can make judgment calls that aren’t just statistically optimal or the most likely choice.
That’s why the idea of AI managing AI agents feels unsettling. Not because it’s impossible, but because it slowly removes responsibility. Efficiency increases, but accountability fades.
What I appreciated about this event is that it didn’t feel like AI hype. No one pretended the technology is neutral, and no one pretended everything will be fine or that AI will solve all the world’s problems. The takeaway wasn’t that AI will save us or destroy us; it was that the future depends on choices that humans make. And most of those choices aren’t technical. They’re economic, political, and cultural.
The future of work won’t be decided by what AI can do. It’ll be decided by what humans choose to protect: creativity, learning, trust, accountability, and community.
AI-first is easy. People-first is harder, messier, and costs more (for now).
But people-first is the only version that feels worth building.
-Victoria

P.S. This post was written with the help of AI. I fed ChatGPT my rough draft and notes from the panel discussion and asked it to help me tie everything together so I could get past my writer’s block.
Which, ironically, brings me right back to one of the main concerns raised during the discussion: becoming so reliant on AI that we lose the ability to do good work ourselves.
I got a little impatient with myself, and instead of sitting with the discomfort and finishing this on my own, I chose the fastest path to getting it done.
When using your own brain to write a blog post starts to feel like going to the gym (something we all know is good for us but consistently avoid), have we gone too far with AI? What could possibly be the solution to the “it’s good for you to DIY, but AI is so much easier, more cost-efficient, and saves so much time!” problem?
How long until we see AI detox apps, the way we have screen-time limit apps? Or “anti-AI” brain training tools to relearn how to write a personal email without assistance? And when those tools exist, will they be AI-generated too?
Food for thought 🙂
(or just ask ChatGPT what to think about it 😉)