Riding the Curve, Not Getting Left Behind
We live in a world of constant change, but some changes matter more than others. Throughout history, certain technological shifts have completely transformed how we work, create, and live. The printing press. The steam engine. The internet. Each arrived with whispers before reshaping everything.
Today, we’re witnessing another such transformation with artificial intelligence—particularly in creative fields. And like those who came before us, we face a critical choice: to embrace and shape this new technology with optimism, or to retreat into cynicism and risk being left behind.
The Pattern of Change
These transformative shifts follow a recognizable pattern that technologists call the “S-curve.” New technology starts slowly, often appearing limited or impractical. Then it hits an inflection point where adoption accelerates dramatically. Finally, it plateaus as it becomes standard practice.
What’s fascinating—and sometimes tragic—is how consistently we underestimate these shifts in their early stages. The technology seems too primitive, too niche, or too flawed to matter… until suddenly, it’s everywhere.
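The S-curve is commonly modeled with the logistic function. Here’s a minimal sketch—with made-up parameters, purely for illustration—showing why early growth looks negligible right up until the inflection point:

```python
import math

def adoption(t, k=1.0, t0=0.0):
    """Fraction of eventual adopters at time t, under a logistic model.

    k (steepness) and t0 (inflection point) are hypothetical parameters,
    not fitted to any real adoption data.
    """
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

# Well before the inflection point, adoption is barely visible...
print(round(adoption(-5), 3))
# ...at the inflection point, half the eventual adopters are on board
# and growth is at its fastest...
print(round(adoption(0), 3))   # → 0.5
# ...and afterward the curve plateaus toward saturation.
print(round(adoption(5), 3))
```

Notice that the curve looks almost flat on both ends—which is exactly why the technology “seems too primitive to matter” early on, and “suddenly everywhere” later.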
Yesterday’s Skeptics
History is littered with the remains of companies and industries that stood at their own inflection points and made the wrong choice.
Kodak engineers invented the digital camera in 1975 but shelved the technology to protect their lucrative film business. As digital photography improved and hit its inflection point, Kodak clung to its fading business model. By the time they pivoted, it was too late—a company that once employed over 140,000 people filed for bankruptcy in 2012.
Nokia dominated mobile phones in the early 2000s, with over 50% global market share. When smartphones emerged, Nokia’s leadership dismissed them as complex gadgets for business users, not the mainstream. They failed to recognize the inflection point of the touchscreen revolution. By 2013, their market share had collapsed to just 3%.
Even tech giants can miss the curve. YouTube’s leadership—even after the company was acquired by Google—initially resisted mobile video, believing screens were too small and networks too slow. Fortunately, they corrected course before the mobile video explosion left them behind.
Today’s Inflection Point
The AI revolution in creative fields feels sudden, but it follows the same pattern—just accelerated. Two years ago, AI-generated images were curiosities. Today, they’re winning art competitions and appearing in major advertising campaigns. AI writing assistants have evolved from clumsy autocomplete to sophisticated co-authors. Voice synthesis, once robotic and limited, now creates convincing performances in any emotion or style.
We’re not at the beginning of this curve. We’re at the inflection point—where adoption accelerates dramatically and the technology starts reshaping industries.
When faced with such rapid change, it’s tempting to assume the worst—that AI tools are designed to replace us, that corporations developing them have malicious intent, or that the technology itself is fundamentally harmful. But this is where we might apply Hanlon’s razor: “Never attribute to malice that which can be adequately explained by simpler motives.” Most AI tools aren’t created to eliminate creative jobs; they’re created to solve problems and expand possibilities, even if sometimes clumsily or with unintended consequences.
Legitimate Concerns from Creative Professionals
I recognize that many creative professionals have valid reservations about embracing AI. After all, this isn’t just another tool in the toolkit—it fundamentally changes the creative landscape.
Many have spent years or even decades mastering their craft, and seeing AI generate similar work in seconds can feel like it diminishes the value of that dedication. There’s genuine worry about market disruption when clients can get “good enough” AI work at a fraction of the cost. Some hold philosophical objections, believing true creativity requires human consciousness and lived experience—qualities that AI fundamentally lacks despite impressive mimicry.
The ethical issues around training data can’t be dismissed either. Many AI tools were trained on creative works without permission or compensation to the original artists, raising serious questions about exploitation and intellectual property rights. And there are legitimate concerns about quality control, as AI can produce convincing but factually incorrect or problematic content.
These perspectives aren’t just knee-jerk resistance to change—they reflect legitimate professional, ethical, and economic concerns that deserve thoughtful engagement.
Reframing the Challenges
I believe these concerns can be addressed not through denial, but through reframing:
AI doesn’t diminish the value of your expertise—it elevates it. The tools can generate raw materials, but they can’t replicate your taste, judgment, and creative vision. Your years of experience become even more valuable as you can now focus on the highest-level creative decisions rather than repetitive production tasks. The master painter doesn’t feel threatened by better brushes.
Economic disruption is real, but history shows that creative fields adapt and evolve. When photography emerged, painters didn’t disappear—they innovated and created impressionism and other movements. Clients seeking “good enough” work were likely already using templates or low-cost alternatives. The real opportunity is in using AI to deliver higher quality at the same price point, or to scale your impact in ways previously impossible.
The philosophical objections highlight precisely why human creativity remains essential. AI excels at pattern recognition and synthesis, but it can’t provide authentic perspective or meaning. This creates a new opportunity for human creators to focus more deeply on the uniquely human aspects of creativity—the “why” rather than just the “how”—while using AI to expand their capabilities.
Rather than rejecting AI due to data ethics concerns, creative professionals can become powerful advocates for ethical AI development by engaging with these tools while demanding transparency, proper attribution, and fair compensation models. Your participation helps shape how these systems evolve.
The need for human oversight isn’t a bug—it’s a feature. It underscores that these tools work best as collaborators, not replacements. Your expertise in spotting factual errors, inappropriate content, or simply mediocre work becomes even more valuable in an AI-augmented workflow.
A Personal Crossroads
Unlike previous technological shifts that primarily threatened companies, this one touches us individually—especially those of us who create for a living. Whether you’re a designer, writer, filmmaker, musician, or engineer, AI tools are transforming your field right now.
This is a career-defining moment. The choices we make today about how we engage with these technologies will likely determine our relevance and success for years to come.
I say this not to instill fear, but to inspire action. As an engineer who works with these tools daily, I’ve seen their limitations as well as their potential. They’re not magic, and they’re not coming for your job—at least, not exactly. Jobs are changing, not disappearing. Throughout history, automation has eliminated certain tasks while creating new roles. The designers who embrace these tools are finding they can take on more ambitious projects and serve more clients.
At some point, I realized I had a fundamental choice to make: approach this technological shift with pessimism or optimism. Pessimism would be easy—pointing out flaws, predicting doom, assuming the worst intentions behind every advancement. But I chose optimism, and it changed everything. When you choose to see new technologies as opportunities rather than threats, the world becomes not just less frightening, but infinitely more exciting.
The Co-Intelligence Opportunity
Professor Ethan Mollick frames this perfectly in his work on what he calls “co-intelligence.” The most powerful applications of AI aren’t about replacement but collaboration—augmenting human creativity rather than supplanting it.
AI works best when paired with human judgment, taste, and purpose. The tools may generate options and expand possibilities, but we provide the direction, refinement, and meaning that transforms raw output into something valuable.
This perspective aligns with Hanlon’s razor. When AI tools produce something strange, inappropriate, or just plain wrong, it’s rarely because they’re programmed to undermine us. It’s usually because they’re imperfect tools with limitations—powerful, but still developing. Applying Hanlon’s razor here means approaching AI with patience and understanding, seeing its shortcomings as technical challenges to overcome rather than evidence of harmful intent.
As Steph Ango writes about empathy, “Empathy isn’t about being nice. It’s about understanding.” This empathetic approach to technology—understanding its limitations while appreciating its potential—creates space for us to collaborate with AI rather than fear it.
The future belongs not to AI alone, but to people who learn to collaborate effectively with these new tools—who understand both their capabilities and limitations, and who approach them with optimism tempered by wisdom.
Riding the Wave: Choose Optimism
So where does this leave us? I believe we have three options:
- Resist the change with pessimism and eventually be overtaken by those who adapt.
- Passively accept the tools but use them as shortcuts rather than partners.
- Actively engage with optimism, influence development, and help shape AI’s role in our creative practices.
The third path is clearly the most promising, because optimism is self-fulfilling. As with any transformative technology, your intention influences the outcome. Here’s how to start:
- Experiment with curiosity. Try different AI tools with a spirit of wonder rather than anxiety. Ask not just “What might go wrong?” but “What might go wonderfully right?”
- Apply Hanlon’s razor. When AI systems produce disappointing or concerning results, remember that clumsy design is more likely than malicious intent. This mental model frees us from unnecessary suspicion and allows us to engage constructively rather than defensively.
- Nurture fragile ideas. When a new possibility presents itself, imagine its best version. AI-assisted creative processes are still emerging—they need champions who can see past early limitations.
- Assume good intentions. Most developers creating these tools want to enhance human creativity, not replace it. Engage with that spirit of collaboration.
- Start small and grow. No one is demanding you transform overnight. Explore these tools in low-stakes ways and give yourself permission to be a beginner again. The goal isn’t to replace your established workflow, but to thoughtfully enhance it.
- Reaffirm your optimism daily. In a world often dominated by tech pessimism, actively choosing optimism requires constant renewal. But it makes the journey infinitely more rewarding.
The S-curve of AI in creative fields is steepening. We can’t stop it, but we can choose how we ride it. I’d rather say “grow and thrive” than “adapt or die.” The future of creativity can be great—but only optimists will be able to imagine it and put in the effort to make it happen. Rather than fearing the wave, let’s learn to surf—and enjoy every moment of it.
The author is an engineer working at the intersection of technology and creativity, who chooses optimism.