Why Diligent Workers Are Most Vulnerable to AI Replacement

MarsBit
Summary: Diligent workers are increasingly at risk of being replaced by AI, particularly those who meticulously document their work. Systems like Feishu and DingTalk generate vast datasets that AI can easily learn from. The "colleague.skill" trend underscores how readily AI can replicate human behavior, raising concerns about job security and ethics. As digitalization expands, so does the threat to roles built on structured, repetitive output.

Unfortunately, in this era, the more diligently and unreservedly you work, the more likely you are to accelerate your own reduction into a skill that AI can replace.

Over the past couple of days, social media trending lists and media channels have been flooded with “colleague.skill.” As this incident continues to gain momentum across major social platforms, public attention has been overwhelmingly drawn to broader anxieties about “AI layoffs,” “capital exploitation,” and the “digital immortality of workers.”

These are certainly stressful, but what worries me the most is a recommendation written in the project's README:

The quality of raw materials determines the quality of skill: prioritize collecting his long-form written content > decision-related responses > casual messages.

Those who work the hardest are precisely the ones most perfectly distilled and pixel-perfectly replicated by the system.

They are the ones who, after every project concludes, still sit down to write detailed retrospectives; those who, when disagreements arise, take the time to type out lengthy messages in the chat, honestly laying out their decision-making logic; and those who are deeply responsible, entrusting every detail of their work meticulously to the system.

Carefulness, once the most revered workplace virtue, has now become a catalyst accelerating workers' transformation into AI fuel.

Burned-out workers

We need to reconsider a word: context.

In everyday contexts, context is the backdrop of communication. But in AI—especially in the world of rapidly evolving AI agents—context is the fuel that powers the engine, the blood that sustains the pulse, and the only anchor that allows models to make precise judgments amid chaos.

An AI disconnected from context, no matter how impressive its parameter count, is merely a search engine with amnesia. It cannot recognize who you are, cannot perceive the hidden currents beneath the business logic, and has no way of knowing the prolonged struggles and trade-offs you endured, amid an intricate web of resource constraints and interpersonal dynamics, before making a decision.

The reason "colleague.skill" has stirred such a massive reaction is that it coldly and precisely targeted the mine that stores vast amounts of high-quality context: modern enterprise collaboration software.

Over the past five years, China’s workplace has undergone a quiet yet profound digital transformation. Tools like Feishu, DingTalk, and Notion have become vast enterprise knowledge bases.

Using Feishu as an example, ByteDance has publicly stated that the volume of documents generated internally each day is enormous, and these dense strings of characters faithfully capture every brainstorming session, every heated meeting debate, and every painful strategic compromise made by over 100,000 employees.

This digital penetration far exceeds that of any previous era. Once, knowledge was alive—with warmth, nestled in the minds of veteran employees, drifting through casual chats in the break room; now, all human wisdom and experience are forcibly stripped of their vitality and coldly stored in the server arrays of the cloud.

In this system, if you don’t document your work, it remains invisible, and new colleagues cannot collaborate with you. The efficient operation of modern enterprises is built upon the daily cycle in which every employee contributes context to the system.

Serious workers, driven by diligence and goodwill, openly share their thought processes on these impersonal platforms. They do so to make the team’s gears mesh more smoothly, to strive to prove their worth to the system, and to desperately carve out a place for themselves within this complex commercial behemoth. They are not willingly surrendering themselves—they are simply clumsily yet earnestly adapting to the survival rules of the modern workplace.

Yet it is precisely this context, left behind for human collaboration, that serves as the perfect fuel for AI.

Feishu’s admin console includes a feature that allows super administrators to bulk export members’ documents and communication records. This means that the project retrospectives and decision-making logic you spent three years crafting through countless late nights can, in just a few minutes, be effortlessly bundled into a cold, lifeless compressed file via a single API call.
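To give a sense of how mechanical such an export is, here is a purely illustrative sketch. None of the names below correspond to the real Feishu API; `fetch_page` merely stands in for one paginated HTTP call to a hypothetical admin endpoint.

```python
import json

def export_all_docs(fetch_page, member_id):
    """Drain every document one member ever wrote by walking pagination.

    `fetch_page(member_id, cursor)` stands in for a single HTTP call to a
    hypothetical admin endpoint; it returns (docs, next_cursor), where
    next_cursor is None on the last page.
    """
    docs, cursor = [], None
    while True:
        page, cursor = fetch_page(member_id, cursor)
        docs.extend(page)
        if cursor is None:
            return docs

# A stub "API": three years of write-ups split across two pages.
PAGES = {
    None: (["retro-2023.md", "retro-2024.md"], "page2"),
    "page2": (["decision-log.md"], None),
}

bundle = export_all_docs(lambda member_id, cursor: PAGES[cursor], "user_123")
print(json.dumps(bundle))
```

A dozen lines of pagination logic is all that separates years of late-night writing from a single compressed archive.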

When a person is reduced to an API

With the popularity of "colleague.skill," unsettling derivatives have begun appearing on GitHub's Issues section and various social media platforms.

Someone created an "Ex.skill," feeding an AI years of WeChat chat logs so it could keep arguing with, or comforting, them in that familiar tone; someone else built an "Idealized Love.skill," reducing an unattainable flutter of emotion to a cold interpersonal simulation that rehearses probing lines in a calculated pursuit of the optimal emotional outcome; yet another crafted a "Paternalistic Boss.skill," digesting a boss's oppressive, manipulative language in digital space to build themselves a tragic psychological defense.


The use of these skills has long since moved beyond the realm of work efficiency. Without realizing it, we have become adept at applying cold, instrumental logic to dismantle and objectify living, breathing human beings.

The German philosopher Martin Buber once proposed that the foundation of human relationships consists of only two fundamentally different modes: "I-Thou" and "I-It."

In the "I-Thou" encounter, we transcend prejudice and regard each other as complete, dignified beings. This bond is wide open, full of vibrant unpredictability, and precisely because of its sincerity it is especially fragile. Yet once we fall into the shadow of "I-It," living people are reduced to objects to be deconstructed, analyzed, and labeled. In this purely utilitarian gaze, the only question we care about becomes: "What use is this thing to me?"

The emergence of products like "Ex.skill" signifies that the instrumental rationality of "I-It" has fully invaded the most intimate realms of emotion.

In a real relationship, people are multidimensional and full of nuances, constantly shifting with contradictions and rough edges; their reactions change based on specific situations and emotional interactions. Your ex may respond very differently to the same sentence upon waking in the morning versus after working late at night.

But when you distill a person into a skill, what you strip away is merely the residual function that happened to be "useful" to you within that specific bond—the part that could "produce utility" for you. In this cruel purification, the once warm, living individual, with their own joys and sorrows, has their soul completely drained, transformed into a mere "functional interface" that you can plug in or unplug at will.

It must be acknowledged that AI did not invent this chilling coldness out of thin air. Long before AI emerged, we had already grown accustomed to labeling others and precisely measuring the “emotional value” and “network weight” of every relationship. For instance, we quantify people’s attributes into tables on dating platforms; in the workplace, we categorize colleagues as “hard workers” or “slackers.” AI merely made explicit what was once an implicit, interpersonal functional extraction.

A person is flattened, leaving only the slice that asks, “What use is this to me?”

Tacit knowledge

In 1958, Hungarian-British philosopher Michael Polanyi published "Personal Knowledge," in which he introduced the highly insightful concept of tacit knowledge.

Polanyi made a famous assertion: "We know more than we can tell."

He gave the example of learning to ride a bicycle. A skilled rider, gliding effortlessly with the wind, keeps perfect balance through every subtle shift of weight, yet cannot precisely convey the bodily intuition of that moment in dry physics formulas or flat words. He knows how to ride, but he cannot explain it. Knowledge of this kind, which cannot be encoded or articulated, is called tacit knowledge.

The workplace is filled with this kind of tacit knowledge. A senior engineer, diagnosing a system failure, might pinpoint the issue with a single glance at the logs, yet find it nearly impossible to document the "intuition" built from thousands of trial-and-error experiences. A skilled salesperson, falling silent mid-negotiation, creates a sense of pressure and perfect timing that no sales manual can capture. An experienced HR professional, during an interview, can detect inconsistencies in a candidate's resume from a half-second avoidance of eye contact.

"Colleague.skill" can only extract explicit knowledge that has been written down or spoken aloud. It can capture your reflection documents, but not the doubts you experienced while writing them; it can replicate your decision responses, but not the intuition behind your decisions.

What the system distills is always just a shadow of a person.

If the story ended here, it would merely be another clumsy imitation of humanity by technology.

But when a person is distilled into a skill, that skill does not remain static. It is used to reply to emails, write new documents, and make new decisions. In other words, these AI-generated shadows begin to create new contexts.

These AI-generated contexts will be stored in Feishu and DingTalk, becoming training material for the next round of distillation.

Back in 2023, research teams from the University of Oxford and the University of Cambridge jointly published a paper on "model collapse." The study found that when AI models are iteratively trained on data generated by other AI systems, the distribution of the data becomes increasingly narrow. Rare, marginal, yet profoundly authentic human traits are rapidly erased. After just a few generations of synthetic data training, models completely forget the long-tail, complex real-world human data and instead produce highly mundane and homogenized outputs.

In 2024, Nature published a research paper indicating that training future generations of machine learning models on AI-generated datasets will severely pollute their outputs.
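The mechanism is easy to see in a toy simulation. Bootstrap resampling stands in here for "training on your own output": each generation draws only from what the previous generation emitted, so any rare value it happens to miss is gone forever, and diversity can only shrink. This illustrates the dynamic; it is not a reproduction of the papers' experiments.

```python
import random

random.seed(0)
n = 500
# Generation 0: "real" human data. Every sample is a distinct value,
# including rare values far out in the tails.
data = [random.gauss(0, 10) for _ in range(n)]

distinct_per_gen = []
for gen in range(30):
    distinct_per_gen.append(len(set(data)))
    # Each "model generation" learns only from the previous generation's
    # output: modeled here as sampling with replacement (a bootstrap).
    data = [random.choice(data) for _ in range(n)]

print("distinct values, gen 0:", distinct_per_gen[0])   # 500
print("distinct values, gen 29:", distinct_per_gen[-1])
```

Run it and the count of distinct values falls generation after generation, because a new generation can never contain a value its parent failed to pass on: the long tail erodes first, exactly the "rare, marginal, yet profoundly authentic" traits the research describes.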

Digital patina

It’s like those meme images circulating online—originally a high-resolution screenshot, passed around and compressed countless times. Each share loses some pixels and adds more noise, until the image becomes blurry and covered in digital patina.

When authentic human context infused with tacit knowledge is exhausted, and the system can only train itself on tarnished shadows, what will ultimately remain?

Who is erasing our traces?

What remains is nothing but statements that are correct yet meaningless.

When the river of knowledge dries up into an endless cycle of AI chewing on AI, everything the system produces will become extremely standardized and extremely safe—but also hopelessly hollow. You’ll see countless perfectly structured weekly reports and flawless emails, yet devoid of any human spirit and lacking any truly valuable insight.

This great collapse of knowledge is not because human brains have become less intelligent; the true tragedy is that we have outsourced the right to think and the responsibility to preserve context to our own shadows.

A few days after "colleague.skill" went viral, a project named "anti-distill" quietly appeared on GitHub.

The author of this project did not attempt to attack large models, nor did he issue any grand declarations. He simply provided a small tool that helps workers automatically fill Feishu or DingTalk with long texts that look reasonable but are riddled with logical noise and carry no real information.

His goal was simple: hide his core knowledge before the system could distill it. Since the distillation pipeline favored scraping long, actively written content, he fed it piles of meaningless filler.
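A generator in this spirit is easy to imagine. The sketch below is a guess at the approach, not the project's actual code: it strings grammatical, work-flavored sentences together at random, so the output is long and plausible-looking but carries no recoverable decision logic. Every phrase list here is invented.

```python
import random

# Invented phrase banks; any resemblance to a real write-up is the point.
SUBJECTS = ["the migration plan", "our retrospective", "the on-call rotation",
            "the quarterly roadmap", "this design review"]
VERBS = ["reframes", "decouples", "forecloses", "anticipates", "normalizes"]
OBJECTS = ["the upstream dependency", "stakeholder alignment",
           "the rollback window", "our earlier assumption", "the edge cases"]
CONNECTIVES = ["Therefore,", "Conversely,", "In parallel,", "That said,"]

def noise_sentence(rng):
    """One grammatical but logically empty sentence."""
    return (f"{rng.choice(CONNECTIVES)} {rng.choice(SUBJECTS)} "
            f"{rng.choice(VERBS)} {rng.choice(OBJECTS)}.")

def noise_document(rng, sentences=40):
    """A long, superficially reasonable 'write-up' with no real content."""
    return " ".join(noise_sentence(rng) for _ in range(sentences))

rng = random.Random(42)
doc = noise_document(rng)
print(doc[:120], "...")
print(len(doc.split()), "words of decoy long-form content")
```

Each document reads like earnest long-form writing, the very category the distillation pipeline was said to prize most, while teaching a scraper nothing.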

This project didn’t go viral like “Colleague.skill”; in fact, it seems small and insignificant. Using magic to defeat magic still operates within the established rules set by capital and technology. It cannot reverse the growing trend of systems becoming increasingly reliant on AI and increasingly disregarding real human beings.

But this does not prevent the project from becoming the most tragically poetic and profoundly metaphorical scene in the entire absurd play.

We work tirelessly to leave traces in the system, write detailed documentation, and make careful decisions, trying to prove our existence and value within this vast modern corporate machine—unaware that these very earnest marks will ultimately become the eraser that wipes us away.

But on the other hand, this isn't necessarily a complete dead end.

Because what that eraser wipes away is always the "you of the past." A skill packaged as a file—no matter how sophisticated its extraction logic—is fundamentally just a static snapshot. It is frozen at the moment of export, reliant on outdated inputs and trapped in endless loops within predefined processes and logic. It lacks the instinct to confront the unknown chaos, and it has no capacity to evolve through real-world setbacks.

When we let go of those highly standardized, rigidly established experiences, we free up our own hands. As long as we continue to reach outward and constantly break down and rebuild the boundaries of our understanding, that shadow lingering in the clouds will forever be forced to follow in our footsteps.

Humans are fluid algorithms.
