At the center of 2084 is not just technology. It’s a disagreement.
A serious one.
Dr. Maya Wong believes artificial intelligence can fix what humanity has broken. She sees its potential clearly: less suffering, more stability, smarter systems guiding a fragile world toward something better.
Zane Abdullah sees something else entirely.
To him, AI is not just a tool. It's a risk. A system that could slowly take control, not through force but through acceptance. His concern isn't what AI can do. It's what people might stop doing once it does everything for them.
That contrast drives the entire story.
And what makes it powerful is that neither side feels exaggerated. Both arguments hold weight. Both feel grounded in reality. Maya represents progress, but not blindly. Zane represents caution, but not without reason.
As a reader, you’re not pushed toward one side. You’re placed between them.
That’s where the real tension lives.
Because this isn’t just their debate. It’s ours.
Every advancement we celebrate carries a question behind it. Every solution introduces a new kind of dependency. And every system we build reflects the values we choose to prioritize.
2084 doesn’t ask you to pick a side quickly. It asks you to sit with the discomfort of not knowing.
To recognize that the future may not be decided by who is right, but by what we’re willing to accept.
And how far we’re willing to go before we stop asking questions.