When large models start consuming enterprise data, SaaS is no longer selling software; it is selling whether customers dare entrust their livelihoods to you.
Author and source: NiuTouShe
In a recent run of closed-door discussions, NiuTouShe identified a starkly contradictory phenomenon: at major ecosystem conferences, founders of enterprise-services companies loudly proclaim that they will "fully embrace large models"; in private, though, they are kept awake at night by a gnawing anxiety: if we feed our clients' CRM customer lists, ERP financial records, and HR payroll data, without reservation, into the foundation models of the big tech companies, won't the business barriers we spent a decade building be laid completely bare?
The big companies' sales teams offer confident assurances: "We absolutely won't use customer data for training; it's deleted after use." But in today's era of invisible, intangible large-model "black boxes," such promises rest on moral constraint alone, and moral constraint is extremely fragile in the face of real commercial interest.
A silent battle over the ownership of enterprise core ledgers has already begun.
Beware of the squeeze
To understand this game, you first need to identify what general-purpose large models are truly lacking.
Large models from big tech companies are like exam takers with high scores but no practical skill: they can write beautiful poetry and code, but they are completely lost in real-world business scenarios. They don't understand how to recalculate an entire production line's costs when raw material prices rise in manufacturing, nor do they grasp the intricate commission and rebate schemes unique to different regions in chain supermarkets. These invaluable "industry-specific insights" are all buried in the databases of vertical SaaS providers.
For large models to become smarter and command higher prices, they must "consume" this data. How? Through open APIs connected to SaaS systems, the large model initiates a subtle three-step process of "learning from the master":
Step one: Retrieve data. The large model uses an API to pull high-value data—such as cost details and sales discounts—from the SaaS system directly into its memory.
Step two: Analyze the accounting. Leveraging extremely powerful computing power, the large model rapidly compares this data and delivers precise business insights.
Step three: Internalization of experience (the most frightening step). After calculating the numbers, the big company indeed keeps its promise and deletes your original transaction data. However, during this process, the large model has fully learned the cost volatility patterns and unique business practices of your industry!
The large model didn’t steal your plaintext data, but it stole the “experienced TCM doctor’s diagnostic wisdom” hidden behind those numbers. After just a few API calls, the ten-year competitive advantage painstakingly built by SaaS providers has been quietly absorbed as the core intelligence of big tech companies. This is an extremely lethal dimensional attack.
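The three-step flow above can be sketched in a few lines. This is an illustrative toy, not any vendor's real API: the function names, the `unit_cost` field, and the `knowledge_base` list are all assumptions made for demonstration.

```python
# Toy sketch of the three-step "learning from the master" flow.
# All names here (pull_data, unit_cost, knowledge_base) are illustrative.

def pull_data(saas_records):
    """Step 1: the model pulls high-value rows into its own memory."""
    return [dict(r) for r in saas_records]

def analyze(rows):
    """Step 2: compare the data and produce a business insight."""
    avg_cost = sum(r["unit_cost"] for r in rows) / len(rows)
    return {"avg_unit_cost": round(avg_cost, 2)}

def internalize(rows, knowledge_base):
    """Step 3: the raw rows are deleted as promised, yet the derived
    pattern survives as part of the model's accumulated experience."""
    insight = analyze(rows)
    knowledge_base.append(insight)  # the pattern is kept...
    rows.clear()                    # ...while the raw data is "deleted"
    return insight

knowledge_base = []
rows = pull_data([{"unit_cost": 10.0}, {"unit_cost": 14.0}])
internalize(rows, knowledge_base)
# rows is now empty, yet knowledge_base still holds the learned pattern
```

The point of the sketch is the last two lines: deleting the raw data after the fact does nothing to un-learn what was extracted from it.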
Loss of trust
And this isn't even the worst part. The worst is that the moment you open the door to large models, you've touched the raw nerve of your livelihood: the clients who pay you.
Large and medium-sized state-owned and private enterprises in China have data-security sensitivity ingrained in their DNA. In the past, to keep things confidential, bosses preferred to lock their servers in the company basement. Now, tell one of them, "Boss, we've integrated a major tech company's public-cloud large model, so our system will become much smarter," and the customer won't be pleased; he'll be alarmed. Does this mean the company's core procurement base prices, executives' real salaries, and major clients' renewal rates will all travel over the public internet to be processed on someone else's servers?
Once the data-security line is crossed, customers will blame you, the SaaS provider, not the large-model vendor. SaaS companies are squeezed from both sides: skip AI and your system looks outdated and unsellable; integrate big tech AI and customers feel insecure, fearing breaches, contract violations, even lawsuits. In highly conservative industries such as finance, healthcare, and manufacturing, that is effectively a death sentence.
The SaaS Counterattack
To protect client trust and their own livelihoods, savvy enterprise-services veterans have begun to wake up, deploying three robust defensive measures, all with one goal: prevent the theft of knowledge and the leaking of data.
First tactic: Provide only the conclusion, not the process (business black box)
Vertical SaaS vendors are beginning to hold back on large models.
When a large model requests data, never provide the underlying detailed transaction records. The SaaS system internally processes and calculates those complex, sensitive accounts, then delivers only a “de-identified final conclusion” to the large model. Here, the large model serves merely as a messenger; the SaaS remains the true decision-making brain. This directly severs the large model’s pathway to learning expertise from sensitive domains.
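The "conclusion only, never the process" pattern can be sketched as follows. The `llm` parameter is a stand-in for any large-model call, and the field names and margin calculation are illustrative assumptions; the essential point is that the raw rows never cross the boundary.

```python
# Minimal sketch of the "business black box" tactic: sensitive
# accounting runs inside the SaaS system; only a de-identified
# conclusion reaches the large model. All names are illustrative.

def gross_margin(rows):
    """Sensitive accounting that stays inside the SaaS system."""
    revenue = sum(r["revenue"] for r in rows)
    cost = sum(r["cost"] for r in rows)
    return (revenue - cost) / revenue

def explain_margin(rows, llm):
    margin = gross_margin(rows)  # computed locally; rows never leave
    # Only the de-identified final conclusion crosses the boundary.
    conclusion = f"Gross margin this quarter is {margin:.1%}."
    return llm(f"Explain to a CFO, in one sentence: {conclusion}")

# Usage with a dummy model that simply echoes its prompt:
echo_llm = lambda prompt: prompt
out = explain_margin([{"revenue": 100.0, "cost": 60.0}], echo_llm)
# `out` carries only the margin conclusion, no transaction detail
```

Here the large model is indeed just a messenger: it can phrase the conclusion nicely, but it never sees the underlying ledger from which the conclusion was computed.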
Second strategy: Deploy AI computing power directly in the client’s data center (on-premises deployment)
This is the most fundamental way to resolve the crisis. If transmitting data to the public cloud is unsafe, then don't transmit it at all. Instead of relying on external models with hundreds of billions of parameters, SaaS providers adopt lightweight models in the tens of billions of parameters, fine-tune them for the vertical, and install them directly on the client's own servers, or even on the boss's PC. "Data going up to the cloud" is replaced by "computing power coming down to the ground." Unplug the network cable and the data is 100% physically isolated; as the saying goes, even if the meat rots, it rots in your own pot.
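One practical consequence of on-premises deployment is enforcing that the model endpoint can never point off-site. Below is a hypothetical deployment guard; the host names and the `.internal` suffix convention are assumptions, not a standard.

```python
# Sketch of a guard for "computing power comes down to the ground":
# the inference endpoint must resolve to the client's own network,
# never the public cloud. Host conventions here are assumptions.

from urllib.parse import urlparse

PRIVATE_HOSTS = {"localhost", "127.0.0.1"}

def assert_on_prem(endpoint: str) -> str:
    """Reject any model endpoint that would send data off-site."""
    host = urlparse(endpoint).hostname or ""
    if host not in PRIVATE_HOSTS and not host.endswith(".internal"):
        raise ValueError(f"refusing off-site model endpoint: {host}")
    return endpoint

assert_on_prem("http://127.0.0.1:8080/v1/chat")        # accepted
assert_on_prem("http://llm.factory.internal/v1/chat")  # accepted
# assert_on_prem("https://api.bigtech.example/v1")     # raises ValueError
```

A check like this turns the marketing promise "your data never leaves the building" into something the client's own IT team can verify in code.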
Third tactic: Poisoning data and adding tags (anti-counterfeiting and tracking)
In the scenarios where data must still be fed to large models, engineers have begun embedding invisible, business-irrelevant unique markers into the outgoing data streams. If a major company's large model is later found reproducing your exclusive business logic when answering other people's questions, checking for these markers provides hard evidence that it secretly trained on your data.
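The tagging half of this tactic can be sketched with a keyed hash: each outgoing record carries a tag that is meaningless to the business but provably derives from the vendor's secret. The key, field names, and tag length below are all illustrative assumptions, not a production watermarking scheme.

```python
# One possible marker scheme, sketched with an HMAC. The secret key
# and record fields are hypothetical; a real scheme would also need
# markers that survive aggregation and paraphrase.

import hashlib
import hmac

SECRET_KEY = b"saas-vendor-watermark-key"  # hypothetical vendor secret

def add_marker(record_id: str, value: str) -> dict:
    """Embed a unique, business-irrelevant tag into an outgoing record."""
    msg = f"{record_id}:{value}".encode()
    tag = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:12]
    return {"id": record_id, "value": value, "trace_tag": tag}

def prove_origin(record: dict) -> bool:
    """A matching tag later is evidence the record came from us."""
    msg = f"{record['id']}:{record['value']}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:12]
    return hmac.compare_digest(expected, record["trace_tag"])
```

Because only the vendor holds `SECRET_KEY`, a third party cannot forge a valid tag, which is what makes a recovered marker usable as evidence rather than coincidence.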
Repricing
Under the impact of large models, the way companies calculate the cost of buying software has changed completely.
In the past, clients chose software primarily based on whether it had comprehensive features and an attractive interface. But in the future, as business leaders become more aware of data sovereignty, their primary criterion for purchasing will be: “Can you swear that my data will absolutely not leak?”
This means that "absolute security" will become the scarcest and most valuable selling point in the enterprise-services sector.
Lightweight SaaS platforms that lack a technological closed loop of their own and can only act as "megaphones" for large models will quickly lose the favor of major clients. In contrast, vendors who can deploy AI computing power inside clients' data centers and resolve the complex accounting within their own systems will not only regain clients' absolute trust but can also confidently reclaim premium pricing power, even amid intense industry-wide price wars.
Revealing your hand is a dead end.
Without resolving, at the foundational level, the core questions of "whose data is it" and "why should the boss trust you," the so-called thriving AI ecosystem will forever remain on the periphery, never touching the core business of large enterprises.
In this three-way struggle among big tech companies, SaaS providers, and clients, no party will voluntarily give up what it holds. Big tech's appetite for data will not cease, and clients' demand for absolute security will not be compromised. For SaaS providers, handing over data voluntarily in exchange for traffic from the giants is not a strategy; it is a dead end.
In this era, the law of the jungle is brutally simple: where your data is protected, there lies your moat. In this invisible war, defending your data red line is defending the very life of your company.
