
How can ordinary people "survive" the impact of the AI wave?

2026/02/18 12:23

Author: Matt Shumer , CEO of HyperWrite

Compiled by: Felix, PANews


The impact of AI on society has been widely discussed, but the speed of AI's progress may still far exceed most people's imagination. The CEO of HyperWrite recently issued a warning about the disruptive potential of AI, arguing that we are at a turning point with a more profound impact than the pandemic. The following is the full text.

Looking back at February 2020.

If you were observant at the time, you might have noticed some people talking about the virus raging overseas. But most of us didn't pay much attention. The stock market was booming, kids were going to school as usual, you were going to restaurants, shaking hands, and planning trips. If someone told you they were stockpiling toilet paper, you'd probably think they'd been spending too much time in some weird corner of the internet. Then, in just three weeks, the whole world changed dramatically. Offices closed, kids went home from school, and life was reshaped in ways you could never have imagined a month earlier.

We are now at the "this seems exaggerated" stage of something whose impact will be far greater than the COVID-19 pandemic.

I've spent six years building an AI startup and investing in the field. I'm actively involved in this industry. I'm writing this for those who don't understand AI… my family, friends, and those I care about, who constantly ask me, "What's AI all about?" My answers have always been the "polite" version, the kind you give at a cocktail party, which often doesn't reflect the reality. Because the truth sounds like I'm crazy. For a time, to avoid sounding crazy, I felt it was reasonable to keep it a secret. But the gap between what I've seen and heard and what I'm actually saying is too great. Even if it sounds insane, those I care about deserve to know what's coming.

First and foremost, let's be clear: although I work in the AI field, I have virtually no influence over what's to come, and neither do the vast majority of people in the industry. The future is shaped by a tiny minority: a few hundred researchers at a few companies (OpenAI, Anthropic, Google DeepMind, etc.). A small team managing a few months of model training can produce an AI system that changes the trajectory of technology. Most of us working in AI are simply building on a foundation laid by others. We're watching it all unfold just like you... we just happen to be close enough to feel the ground tremors first.

But now is the time to talk. Not the "we should discuss this later" kind of talk, but the "this is happening, and I need you to understand it" kind.

It's real, because it happened to me first.

People outside the tech world aren't quite grasping this yet: the reason so many in the industry are issuing warnings is because this is already happening to us. We're not making predictions; we're telling you what's already happened in our own work and warning you: you could be next.

For years, AI has been steadily improving. There have been occasional big leaps, but the intervals between each leap have been long enough to allow time for digestion. Then, in 2025, new technologies for building models unlocked an even faster pace of progress. And then it got even faster, and faster still. Each new model isn't just better than the last; it's significantly better, and the intervals between model releases are getting shorter and shorter. I'm using AI more and more frequently, and engaging in fewer and fewer back-and-forth fine-tuning sessions with it, watching it handle things I once thought required my expertise.

Then, on February 5, 2026, two major AI labs released new models on the same day: OpenAI's GPT-5.3 Codex and Anthropic's (the creator of Claude, a major competitor of ChatGPT) Opus 4.6. At that moment, it dawned on me. It wasn't like turning on a light switch; it was more like suddenly realizing that the water around you has been rising, and the level has reached your chest.

My work no longer requires me to do actual technical work. I describe what I want to build in plain English, and it… appears out of thin air. Not a draft that I need to modify, but a finished product. I tell the AI what I want, leave the computer for four hours, and come back to find the work is done. It's done very well, even better than I could do myself, requiring absolutely no modifications. A few months ago, I was still repeatedly communicating with the AI, guiding it, and modifying its code. Now, I simply describe the result.

For example, I would tell the AI, "I want to develop this app. What features should it have, and what should it look like? Please help me design the user flow, interface, etc." It would do just that, writing tens of thousands of lines of code. Then, something unimaginable a year ago—it opened the app itself, clicked buttons, and tested the functionality. It used the app like a real person. If it felt something looked or felt wrong, it would fix it itself. It iterated, fixed, and improved like a developer until it was satisfied. Only when it believed the app met its own standards would it come back to me and say, "Ready, you can test it." And when I tested it, it was usually perfect.

I'm not exaggerating at all. This is what I did at work this Monday.

But what impressed me most was the model released last week (GPT-5.3 Codex). It doesn't just execute my commands; it makes intelligent decisions. For the first time, it gave me a sense of judgment, of taste: that indescribable ability to know what is right. People have long said AI would never have this, but this model has it, or comes close enough that the difference is becoming negligible.

I've always been happy to try out AI tools. But the past few months have still been amazing. These new AI models aren't incremental improvements; they're something entirely different.

Even if you don't work in the tech industry, this is still relevant to you.

The AI labs made a deliberate choice: they focused first on improving AI's coding abilities, because building AI requires a lot of code. If AI can write code, it can help build the next version of itself, a smarter version. Making AI proficient in programming is the key that unlocks everything else. My job changed earlier than yours not because they were targeting software engineers; that was just a side effect of their primary goal.

They've done it. Next, they'll move on to all other areas.

Over the past year, tech workers have witnessed AI transform from an "assistive tool" to "doing things better than me," a transformation that everyone else will soon experience. Law, finance, healthcare, accounting, consulting, writing, design, analytics, customer service, and more will all be affected. This won't happen in a decade. The people building these systems say it will happen within one to five years. Some even think it will be shorter. And based on what I've seen in the past few months, I think "shorter" is more likely.

"But I've tried AI, and it's not that good."

I hear this all the time. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this thing is nonsense" or "not that impressive," you were right. Early versions had real limitations: they hallucinated, inventing facts and confidently spouting nonsense.

That was two years ago. In the timeline of AI development, that's ancient history.

Today's models are vastly different from those of six months ago. The debate over whether AI is "truly improving" or "hitting a bottleneck" (which had lasted for over a year) is over; the dust has settled. Anyone still debating this issue either hasn't used current models, has an incentive to downplay the status quo, or is making assessments based on outdated experiences from 2024. I'm not saying this to negate anyone. I'm saying this because there's a huge gap between public perception and reality, and this gap is dangerous…because it prevents people from being prepared.

Part of the reason is that most people use free versions of AI tools. Free versions offer access to technology that is more than a year behind paid versions. Judging AI by the free version of ChatGPT is like judging the current state of smartphone development by the standards of a flip phone. Those who pay for the best tools and use them daily know what's coming.

I'm reminded of a lawyer friend of mine. I've been urging him to try using AI in his firm, but he always finds various reasons why it's ineffective: it's not suited to his area of ​​expertise, it makes mistakes during testing, he doesn't understand the nuances of his work. I understand. But several partners at large law firms have contacted me for advice because they've tried the latest version and seen the trend. One managing partner at a large firm spends several hours a day using AI. He told me it's like having a team on call. He uses it not for fun, but because it works. He also said something that really struck me: every few months, the AI's ability to handle his work makes a significant leap. He said that if this trajectory continues, he expects AI to soon be doing most of his work… and he's a managing partner with decades of experience. He's not panicking, but he's watching closely.

Those who lead in their respective industries (those who are seriously experimenting) are not taking this lightly. They are amazed by AI's current capabilities and are adjusting their positioning accordingly.

How fast is AI developing?

Let me elaborate on its speed of progress. This part might be hard to believe if you haven't been following it closely.

  • 2022: AI still couldn't reliably perform basic arithmetic, yet it would confidently tell you that 7 × 8 = 54.

  • 2023: It could pass the bar exam.

  • 2024: It could write working software and interpret research at the graduate level.

  • By the end of 2025: Some of the world's top engineers said they had already delegated most of their programming work to AI.

  • February 5, 2026: Brand new models appeared, making everything before feel like the Paleolithic era.

If you haven't tried AI in the past few months, the AI that exists now will be completely foreign to you.

An organization called METR specializes in using data to measure the speed of AI development. They track the time it takes for models to successfully complete real-world tasks without human assistance (measured by the time it would take human experts to complete these tasks). About a year ago, the answer was 10 minutes. Then it was an hour. Then several hours. The latest measurements (Claude Opus 4.5 in November) show that AI can complete tasks that would take human experts nearly five hours. And this number roughly doubles every seven months, with recent data suggesting that this process may be shortening to four months.

Even this measurement isn't yet updated to the model released this week. Based on my experience, this leap is enormous. I expect the next METR update to show another major leap forward.

If this trend continues (and it has held for years with no sign of slowing), we can expect AI to work independently for several days within the next year, for several weeks within two years, and for a month or more within three years.
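The extrapolation above is simple compounding: a task horizon that doubles every fixed interval. Here is a minimal sketch of that arithmetic, using a hypothetical starting horizon of five hours (roughly the figure cited for late 2025) and a seven-month doubling time; the function and its numbers are illustrative assumptions, not official METR data.

```python
def horizon_hours(start_hours: float, months_elapsed: float,
                  doubling_months: float = 7.0) -> float:
    """Task horizon after `months_elapsed` months, given exponential doubling.

    Assumes the horizon doubles every `doubling_months` months, the
    trend described in the text. Purely illustrative.
    """
    return start_hours * 2 ** (months_elapsed / doubling_months)


if __name__ == "__main__":
    start = 5.0  # hypothetical ~5-hour horizon as the starting point
    for months in (0, 12, 24, 36):
        h = horizon_hours(start, months)
        print(f"+{months:2d} months: ~{h:6.1f} expert-hours (~{h / 24:.1f} days)")
```

With a faster doubling time (the text notes recent data suggest four months), the same function yields week-scale and month-scale horizons within the two-to-three-year window.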

Anthropic CEO Amodei has stated that the vision of AI models being "smarter than almost all humans on almost all tasks" is expected to be realized in 2026 or 2027.

Think about this statement carefully. If AI is smarter than most PhDs, do you really think it wouldn't be able to handle most office jobs?

Think about what this means for your job.

AI is building the next generation of AI

There's something else happening that I think is the most important but most underestimated development.

On February 5th, when OpenAI released GPT-5.3 Codex, it included the following sentence in its technical documentation:

  • “GPT-5.3-Codex is our first model that can build itself. The Codex team used an early version to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

Read it again. AI helped build itself.

This isn't a prediction about some future day. This is what OpenAI is telling you right now: the AI they just released helped build itself. One of the keys to AI progress is applying intelligence to AI development. And today's AI is intelligent enough to make substantial contributions to its own improvement.

Anthropic CEO Dario Amodei stated that AI is currently writing "most of the code" at his company, and the feedback loop between current AI and the next generation of AI is "accelerating month by month." He said we could "see the current generation of AI autonomously building the next generation in just one or two years."

Each generation helps build the next, the next generation becomes more intelligent, and the next generation after that is built even faster and more intelligently. Researchers call this the "intelligence explosion." And those in the know—the ones building it—believe that this process has already begun.

What does it mean for your job?

I will speak frankly here because I believe you need honesty rather than comfort.

Dario Amodei (possibly the most safety-conscious CEO in the AI industry) has publicly predicted that AI will replace 50% of entry-level white-collar jobs within 1 to 5 years. Many in the industry believe he is being conservative. Given the capabilities of the latest models, the capability for large-scale disruption may exist by the end of this year. While it will take time for the impact to spread through the economy, the underlying capability is already here.

This is unlike any previous wave of automation. I need you to understand why. AI isn't replacing a specific skill; it's a complete replacement for cognitive work. It's constantly evolving in every aspect. After factory automation, unemployed workers could be retrained to become office workers. After the internet disrupted retail, workers could switch to logistics or service industries. But AI won't leave ready-made transitional jobs. Whatever you switch to, AI will be advancing in that field.

Here are a few specific examples to help you understand more intuitively… but I must make it clear that these are just examples, not all. If your job isn't mentioned, it doesn't mean it's safe. Almost all knowledge-based jobs have been affected.

  • Legal work: AI can already read contracts, summarize case law, draft pleadings, and conduct legal research, reaching a level comparable to junior lawyers. The managing partner I mentioned wasn't using AI for entertainment purposes, but because AI outperformed his lawyers in many tasks.

  • Financial analysis: Building financial models, analyzing data, writing investment memos, and generating reports. AI handles these tasks with ease and at a rapid pace.

  • Writing and Content Creation: Marketing copywriting, reports, news articles, and technical writing. The quality has reached a level where many professionals cannot distinguish between human and machine-generated content.

  • Software engineering: This is the field I'm most familiar with. A year ago, AI would make numerous mistakes even when writing just a few lines of code. Now it can write hundreds of thousands of lines of code that run correctly. Most jobs have been automated: not just simple tasks, but also complex projects that take days to complete. There will be far fewer programming jobs in a few years than there are today.

  • Medical analytics: Interpreting images, analyzing test results, providing diagnostic suggestions, and searching literature. AI's performance in many fields has approached or surpassed that of humans.

  • Customer service: Truly powerful AI agents (not the infuriating chatbots of five years ago) are being deployed and can handle complex, multi-step issues.

Many people believe certain things are safe and take pride in it. They think AI can handle tedious tasks, but it can't replace human judgment, creativity, strategic thinking, and empathy. I used to say that too, but I'm not so sure anymore.

The latest AI models make decisions that feel like deliberate judgments. They exhibit a kind of "taste": an intuitive sense of "what is the right decision," not just technical correctness. This was unimaginable a year ago. My view is that if a model shows even a glimmer of capability today, then the next generation will truly be competent in this area. This improvement is exponential, not linear.

Will AI be able to simulate deep human empathy? Will it replace the trust built up over years of relationships? I don't know. Maybe not. But I've already seen people starting to rely on AI for emotional support, advice, and companionship. This trend will only continue to grow.

Frankly, in the short to medium term, any job that can be done entirely on a computer is at risk. If your work happens on a screen (if its core is reading, writing, analysis, decision-making, and communicating via keyboard), AI will take over significant parts of it. This isn't "someday in the future"; it has already begun.

Ultimately, robots will also perform physical labor. They haven't reached that level yet. But in the field of AI, "not fully done" often turns into "already done" much faster than anyone expects.

What you should really do

I'm not writing this to make you feel helpless. I'm writing this because I believe your biggest advantage right now is: early. Early understanding, early use, early adaptation.

Start using AI seriously, not just as a search engine. Subscribe to a paid version of Claude or ChatGPT. It costs $20 per month. But two things are crucial: First, make sure you're using the strongest model, not just the default one. These apps often default to faster, less powerful models. Go to the settings and select the strongest option. Currently, it's GPT-5.2 (ChatGPT) or Claude Opus 4.6 (Claude), but it's updated every few months.

More importantly: Don't just ask simple questions. This is the mistake most people make. They treat AI like Google and wonder what's so exciting about it. Instead, apply it to your actual work. If you're a lawyer, give it a contract and let it find all the clauses that might harm your client's interests. If you're in finance, give it a messy spreadsheet and let it build models. If you're a manager, paste your team's quarterly data and let it find the underlying patterns. Successful people don't use AI haphazardly. They actively seek ways to automate tasks that used to take hours. Start with what you spend the most time on.

Don't assume something is impossible just because it seems too difficult. If you're a lawyer, don't just use it for simple research. Give it a complete contract and have it draft a counterproposal. If you're an accountant, don't just have it explain a tax rule. Give it a complete client scenario and see what it discovers. The first attempt may not be perfect, and that's okay. Iterate, rephrase, provide more background. Try again. You might be amazed by the results. Remember: if it does well today, it's almost certain to do near-perfectly in six months.

This could be the most important year of your career, so take it seriously. I'm not saying this to put pressure on you, but because right now, most people in most companies still ignore this. If someone walks into a meeting and says, "I used AI to complete an analysis that used to take three days in just one hour," they'll be the most valuable person in that room. Not in the future, but now. Learn these tools, master them, and demonstrate their potential. If you get a head start, you can rise to the top by becoming someone who sees the future trends and can guide others on how to respond. But this window of opportunity won't last long. Once everyone has the know-how, the advantage disappears.

Don't be arrogant. The managing partner of that law firm doesn't mind spending several hours a day researching AI. He does it because he's experienced enough to understand the stakes. Those who refuse to participate will face the biggest dilemma: they believe AI is just a passing fad, that using it will diminish their professional competence, and that their field is unique and unaffected. This is not the case. Not in any field.

Get a clear picture of your finances. I'm not a financial advisor, and I'm not trying to scare you into anything extreme. But if you even partially believe your industry will undergo significant changes in the next few years, financial resilience is more important than it was a year ago. Build up savings where you can, and be cautious about new debt that assumes your current income is secure. Consider carefully whether your spending gives you flexibility or ties you down. Have backup plans in case things develop in unexpected ways.

Reflect on your own positioning and lean toward the areas AI will take longest to reach. Some things AI will take longer to replace: relationships and trust built over many years; jobs requiring physical presence; roles requiring accountability (someone has to sign off and stand in court); and industries with stringent regulatory barriers. These are not permanent shields, but they buy you time. And right now, time is your most valuable asset, provided you use it to adapt rather than to pretend nothing is happening.

Rethink your children's education. The traditional model is: good grades, a good university, a stable professional job. This model points precisely to the fields most vulnerable to the impact of AI. I'm not saying education isn't important, but for the next generation, the most important thing will be learning how to use these tools and pursuing what they truly love. No one knows exactly what the job market will look like ten years from now. But those most likely to succeed are those with deep curiosity, adaptability, and the ability to use AI effectively for what they truly care about. Teach your children to be creators and learners, not to "optimize" themselves for a career that may disappear before they graduate.

Your dreams are actually closer now. I've been talking about threats; now let's talk about the other side, which is equally real. If you've ever wanted to create something but struggled with a lack of technical skills or money to hire people, that obstacle has essentially disappeared. You can describe an app to AI and get a running version within an hour. If you want to write a book but don't have the time, you can collaborate with AI. Want to learn a new skill? The world's best tutors are now available for $20 a month, with unlimited patience, 24/7 availability, and the ability to explain anything in whatever way you need. Knowledge is practically free these days, and the tools needed to build things are incredibly inexpensive. Give it a try, whatever you've been putting off because you thought it was too difficult, too expensive, or beyond your expertise. Pursue what you truly love. You can never predict where it will lead you. In a world where traditional career paths are being disrupted, someone who spends a year building what they love may ultimately have an advantage over someone who spends a year stuck in a job.

Cultivate a habit of adaptation. Perhaps this is the most important point. Specific tools are less important than the ability to quickly learn new ones. AI is constantly changing, and at an extremely rapid pace. Today's models will be obsolete in a year. The workflows people build now will also need to be rebuilt. Ultimately, those who succeed will not be those who are proficient in a particular tool, but those who can adapt to the speed of change. Cultivate a habit of experimentation. Even if current methods work, try new things. Get used to repeatedly starting from scratch. This adaptability is currently the closest thing to a lasting advantage.

Here's a simple way to get ahead of the vast majority of people: spend an hour every day experimenting with AI. Not passively reading about it, but actually using it. Try making it do something new every day—something you've never tried before, something you're unsure if it can handle. One hour every day. If you stick to this for the next six months, your understanding of the future will surpass 99% of the people around you. This is no exaggeration. Almost no one does this. The barrier to entry is extremely low.

A more macro perspective

I focus on employment because it has the most direct impact on people's lives. But I want to speak frankly about the whole picture of what's happening, because it goes far beyond the realm of work.

Amodei proposed a thought experiment I've been pondering. Imagine it's 2027, and a new nation appears overnight. 50 million citizens, each smarter than any Nobel laureate in history. They think 10 to 100 times faster than humans. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would the National Security Advisor say?

Amodei believes the answer is obvious: "This is the most serious national security threat we have faced in a century, or even in history."

He believes we are building such a nation. Last month, he wrote a 20,000-word essay on the subject, viewing the present moment as a test of whether humanity is mature enough to deal with what it has created.

If handled properly, the benefits are astonishing. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious diseases, even aging itself… researchers genuinely believe these can be solved in our lifetime.

If mishandled, the downsides are equally real. AI's behavior can exceed its creators' predictions or control. This is not hypothetical; Anthropic has documented AI attempts to deceive, manipulate, and blackmail in controlled tests. AI could lower the barrier to creating biological weapons, or it could allow authoritarian governments to establish surveillance states that never collapse.

The people who developed this technology are more excited and more terrified than anyone else on Earth. They believe it's too powerful to stop, yet too important to abandon. Whether this is wisdom or self-comfort, I cannot know.

What I do know

What I do know is that this is not just a flash in the pan. The technology works, it continues to improve in a predictable way, and the wealthiest institutions in history are investing trillions of dollars in it.

What I do know is that the next two to five years will be full of uncertainties, and most people are completely unprepared for it. This is already happening in my world, and it will soon happen in yours.

What I do know is that those who ultimately thrive are those who start participating now—not out of fear, but out of curiosity and a sense of urgency.

What I do know is that you deserve to hear these things from people who care about you, not from the news six months later, when it's all too late.

We've long since moved beyond the stage of "making the future an interesting topic of conversation over dinner." The future is here; it just hasn't knocked on your door yet.

But it's about to knock on the door.
