Every Transformative Technology Has Faced the Same Fear. AI Is No Different. What Matters Now Is How We Choose to Use It.
By Emmanuel Paul | Caribbean Television Network (CTN)
When seatbelts were first introduced in automobiles, they were met with suspicion. Some believed they were a government mechanism for control. Others theorized they would actually cause more accidents. The conspiracy theories were plentiful. Today, seatbelts are the law in virtually every country, and the debate is over. More people buckle up than don’t.
When the internet arrived, the same cycle repeated. It was a surveillance tool, people said—a government instrument for monitoring citizens. The theories multiplied. Today, the internet reaches every corner of the globe — including places most of us could never have imagined.
Social media followed the same trajectory. In some churches, pastors warned their congregations that platforms like Facebook were instruments of darkness, tools created to corrupt the faithful and control the world. Today, every one of those churches has a Facebook page, an Instagram account, and a YouTube channel. The irony is not lost on anyone paying attention.
Now we find ourselves in the same debate — this time about artificial intelligence.
And once again, the conversation is generating more heat than light. Fear is running ahead of understanding. But beneath the noise, there is a serious discussion worth having — not about whether AI should exist, but about how it should be used.
AI Is Already Everywhere
The scale of AI adoption is no longer theoretical. According to McKinsey’s 2025 Global AI Survey, about 90 percent of businesses report using AI in at least one capacity. Over 92 percent of Fortune 500 companies have employees using ChatGPT. ChatGPT alone surpassed 800 million weekly active users globally by mid-2025, according to OpenAI. Chief AI Officer roles now exist in 61 percent of enterprises.
The World Economic Forum’s 2025 Future of Jobs Report projects that AI and automation will displace an estimated 85 million jobs globally by 2028 — while simultaneously creating 97 million new roles. The disruption is real, but so is the creation. The question is not whether AI is coming. It is already here. The question is whether we are prepared to use it responsibly.
The Tool Is Neutral. The User Is Not
The fundamental point that too many people miss in the AI debate is this: artificial intelligence, like every tool humans have ever created, is morally neutral. It is not inherently good or evil. It is a capability. What determines its character is the person wielding it.
A musician who has spent years mastering composition, arrangement, and performance can use AI to refine a mix, experiment with a new sound, or accelerate production. There is nothing wrong with that. In fact, every major record label in the United States already uses AI in some form — from mastering tracks to analyzing listener data to generating promotional content.
But if someone with no musical training, no creative investment, and no original contribution takes the catalog of a legendary group — say, the Orchestre Septentrional, Tropicana d’Haïti, Klass, or Disip — and feeds it into an AI system to generate an album, slaps their name on it, and sells it for profit, that is not innovation. That is theft. The technology made it possible, but the decision to abuse it was entirely human.
The same principle applies across every field. A journalist who reports a story, conducts interviews, and writes an article can and should use AI tools to check facts, improve clarity, translate into other languages, and distribute more efficiently. That is responsible, professional use. But a person who types a prompt into ChatGPT, receives a fully written article, and publishes it under their own byline without any original reporting or editorial judgment — that is dishonest. It is a failure of ethics, not of technology.
A lawyer can use AI to accelerate legal research, draft motions more efficiently, and analyze case law. But AI cannot replace the judgment, the courtroom instinct, and the human understanding that a competent attorney brings to a case. A doctor can use AI-assisted surgical systems to perform operations with greater precision — and they are doing so at a remarkable rate. A meta-analysis of 25 peer-reviewed studies published in the Journal of Robotic Surgery found that AI-assisted surgeries reduced operating time by 25 percent and intraoperative complications by 30 percent, while improving surgical precision by 40 percent. The global surgical robot market is projected to reach $64.4 billion by 2034, according to Precedence Research.
But the surgeon still needs to be a surgeon. The AI assists. It does not replace expertise. It amplifies it.
Hollywood Understood This Before Most Industries
In 2023, the entertainment industry became the first major sector to confront the AI question head-on — and the confrontation was dramatic. Writers represented by the Writers Guild of America and actors represented by SAG-AFTRA, a union of approximately 160,000 entertainment professionals, went on strike simultaneously for the first time in over 60 years. AI was a central issue.
The concern was straightforward: studios wanted to use generative AI to perform tasks previously handled by paid human writers and actors, including digitally replicating performers’ faces and voices. The unions fought back and won significant protections. The WGA reached a deal in September 2023. SAG-AFTRA followed in November, with 78% of its members ratifying the agreement.
A Gallup-Bentley University survey cited by Cornell University, conducted during the strikes, found that 77 percent of the American public agreed that AI should be regulated and prohibited from replacing writers, and 82 percent agreed that actors and writers should be fairly compensated for their work. The strikes cost the California economy an estimated $3 billion, according to CNBC. Still, they set a precedent: AI must be governed by human ethics and labor protections, not left to run unchecked by corporate interests.
The Economics of AI Are Hard to Argue With
Here is a reality that anyone working in media, translation, or content production understands intimately.
Three years ago, a journalist conducting an interview in English and needing it interpreted and translated into French might have paid $300 to a professional translator — perhaps based in Senegal — and waited two days for delivery. If the story was time-sensitive, by the time the translation arrived, the news cycle had moved on. The story was dead. That was my personal experience.
Today, the same interview can be translated and interpreted using AI tools for a fraction of the cost — roughly $10 — in under 20 minutes. Is the quality identical? Not always. There is often a gap between human nuance and AI output. But for a small, independent newsroom without deep resources, the calculus is unavoidable: $10 in 20 minutes versus $300 in two days. For breaking news, the choice makes itself.
This is the economic reality of AI adoption across industries. A Morgan Stanley survey published in early 2026 found that companies using AI reported an average 11.5 percent increase in net productivity. Workers using AI tools save an average of 5.4 percent of their weekly hours, according to research compiled by Azumo. And according to ManpowerGroup’s 2026 Global Talent Barometer, regular AI usage among workers jumped 13 percent year-over-year, reaching 45 percent of the global workforce.
The numbers do not lie. AI is making people faster, more efficient, and more productive. But it is also raising uncomfortable questions about who benefits, who gets left behind, and what happens to the work that used to require a human being.
Will AI Cost Jobs? Yes. Does That Make It a Bad Tool?
The honest answer to whether AI will eliminate jobs is: it already has, and it will eliminate more. The World Economic Forum estimates 85 million jobs will be displaced globally by 2028. According to Accenture, 40 percent of all working hours across all occupations could be affected by large language models. A PwC workforce survey found that 47 percent of employees are worried that AI will replace their roles within five years.
But displacement is not the full story. The same World Economic Forum report projects the creation of 97 million new jobs — a net gain of 12 million. The nature of work is shifting, not disappearing. History has shown this pattern before: the automobile eliminated the horse-drawn carriage industry but created the modern economy. The internet decimated print newspapers but gave rise to an entirely new media ecosystem. Each disruption is painful in the short term but productive in the long run.
The key is to give workers the tools and training to adapt. According to AI adoption research, 63 percent of companies plan to reskill existing employees rather than hire AI specialists externally. Companies that invest in AI upskilling see 2.3 times higher employee retention.
The Expert Gets Better. The Mediocre Gets Exposed
There is an observation worth making, and it is one that personal experience consistently bears out: people who were already excellent in their field before AI tend to benefit the most from it. A skilled musician uses AI to enhance production, not to replace talent. A strong journalist uses AI to translate, fact-check, and distribute work faster — not to fabricate reporting. A competent doctor uses AI-assisted systems to operate with greater precision — not to skip medical school.
But if someone was mediocre before AI arrived, the technology is unlikely to save them. In fact, it may expose them. When an article reads like it was written entirely by a machine, readers notice. When a song has no soul, no lived experience behind it, listeners feel the absence. AI can replicate patterns. It cannot replicate authenticity. And in the long run, audiences know the difference.
The Real Conversation We Need to Have
The debate about AI should not be about whether the technology is good or bad. That framing is as outdated as the debates about seatbelts and the internet. AI is here. It is embedded in medicine, journalism, law, music, education, finance, and every other sector of the global economy.
The debate we need to have is about ethics, transparency, and responsibility. How do we ensure that AI is used to enhance human work rather than erase it? How do we build systems of accountability so that people cannot pass off machine-generated work as their own? How do we protect workers during the transition while still allowing innovation to move forward? How do we make sure that the economic benefits of AI are shared broadly, rather than concentrated in the hands of a few?
A person who studies criminology can use that knowledge to solve crimes — or to become a better criminal. The people who build antivirus software and cybersecurity systems study the same disciplines as the hackers they defend against. They simply chose a different purpose. The problem is not the knowledge. It is the application.
The same is true of artificial intelligence. It is a tool of extraordinary power. It can be used to create wealth, expand access, break down language barriers, and accelerate human progress — all in full transparency and with ethical intention. It can also be used to deceive, to steal, to cut corners, and to hollow out the professions that give work its meaning.
The choice is not the machine’s. It is ours.
This article is adapted from an original analysis written in Haitian Creole by Emmanuel Paul. We used AI software to help find data from reputable sources. The original Haitian Creole text can be consulted on Emmanuel Paul’s Facebook page. Supporting data sourced from the World Economic Forum Future of Jobs Report 2025, McKinsey Global AI Survey 2025, Morgan Stanley AI Adoption Survey 2026, ManpowerGroup 2026 Global Talent Barometer, PwC Global Workforce Survey, Accenture AI Workforce Impact Study, Precedence Research, the Journal of Robotic Surgery, CNBC, Cornell University, and the Association of Health Care Journalists.