Odaily Planet News: Recently, the decentralized AI computing power network Gonka explained its phased adjustments to the PoC mechanism and model operation during a community AMA. The main changes are: using a single large model for both PoC and inference, moving PoC activation from delayed switching to near real-time triggering, and revising the computing power weight calculation to better align with the actual computational costs of different models and hardware.
Co-founder David stated that the aforementioned adjustments are not aimed at short-term outputs or individual participants, but rather a necessary evolution of the consensus and validation structure as the network's computational capacity rapidly scales. The goal is to enhance the network's stability and security under high load conditions, laying the foundation for supporting larger-scale AI workloads in the future.
Regarding community concerns that small models can generate a disproportionately high number of tokens, the team noted that models of different sizes consume very different amounts of computational resources when producing the same number of tokens. As the network evolves toward higher computational density and more complex tasks, Gonka is gradually aligning computing power weights with actual computational costs, to avoid a long-term imbalance in the compute structure that could limit the network's overall scalability.
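The point above can be made concrete with a back-of-the-envelope sketch: if rewards are weighted by estimated FLOPs rather than raw token count, a large model emitting fewer tokens can earn the same credit as a small model emitting many. The model sizes, token counts, and the ~2 FLOPs-per-parameter-per-token forward-pass approximation below are illustrative assumptions, not Gonka's actual formula.

```python
# Hypothetical sketch: crediting work by estimated compute cost instead of
# raw token count. All figures are illustrative, not Gonka's real weights.

def flops_per_token(n_params: float) -> float:
    # Rough transformer forward-pass estimate: ~2 FLOPs per parameter per token.
    return 2.0 * n_params

def compute_weight(tokens: int, n_params: float) -> float:
    # Weight = tokens produced x estimated cost per token.
    return tokens * flops_per_token(n_params)

# A small model emitting many tokens vs. a large model emitting 10x fewer:
small = compute_weight(tokens=1_000_000, n_params=7e9)   # hypothetical 7B model
large = compute_weight(tokens=100_000, n_params=70e9)    # hypothetical 70B model

print(small, large, large / small)
# Under this weighting the two earn equal credit (ratio 1.0), whereas a
# token-count-only metric would favor the small model 10:1.
```

Under a token-count-only metric the small model would be over-rewarded relative to the compute it actually delivered, which is the imbalance the adjustment targets.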
Under the latest PoC mechanism, the network has reduced the PoC activation time to within 5 seconds, minimizing computational waste caused by model switching and waiting, and allowing a higher proportion of GPU resources to be used for effective AI computation. At the same time, by unifying model execution, the system overhead caused by switching between consensus and inference is reduced, improving the overall computational efficiency.
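The efficiency claim above can be sketched as simple arithmetic: the fraction of an epoch spent on useful computation grows as switching and activation overhead shrinks. The epoch length and overhead figures below are illustrative assumptions, not measured Gonka values.

```python
# Back-of-the-envelope sketch: effective GPU utilization as model-switching
# overhead shrinks. All timings here are illustrative assumptions.

def effective_utilization(epoch_seconds: float, overhead_seconds: float) -> float:
    # Fraction of an epoch spent on useful AI computation rather than on
    # loading/unloading models between consensus (PoC) and inference duties.
    return (epoch_seconds - overhead_seconds) / epoch_seconds

epoch = 3600.0  # hypothetical 1-hour epoch

slow = effective_utilization(epoch, 300.0)  # 5 minutes of delayed switching
fast = effective_utilization(epoch, 5.0)    # near real-time, ~5 s activation

print(slow, fast)
```

With these assumed numbers, utilization rises from roughly 91.7% to about 99.9%, which is the sense in which a faster PoC trigger lets a higher proportion of GPU time go to effective AI computation.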
The team also emphasized that single GPUs and small-to-mid-scale GPU resources can continue to earn revenue and participate in governance through collaborative mining pools, flexible per-epoch participation, and inference tasks. Gonka's long-term goal is to support the coexistence of different tiers of computing power within the same network through evolving mechanisms.
Gonka stated that all key rule adjustments have been advanced through on-chain governance and community voting. In the future, the network will gradually support more model types and AI task formats, providing continuous and transparent participation opportunities for GPUs of all sizes globally, and promoting the long-term healthy development of decentralized AI computing infrastructure.
