To achieve fair, sustainable and transparent artificial intelligence, decentralization is the best way!

The intersection of Web3 and artificial intelligence (AI) has become one of the hottest debate topics in the crypto community. After all, generative AI is revolutionizing every layer of the traditional software stack, and Web3 is no exception. Given that decentralization is the core value proposition of Web3, many emerging Web3 generative AI projects and use cases are built around some form of decentralized generative AI value proposition.

In Web3, we have a long history of looking at every field through a decentralization lens, but the reality is that, while every field has some plausible decentralization scenarios, not all of them actually benefit from decentralization. Take artificial intelligence as an example.

Artificial intelligence is digital knowledge, and knowledge may be the most important construct in the digital world and one that deserves decentralization. Throughout the history of Web3, however, we have made many attempts to decentralize things that already worked very well in centralized architectures, and decentralization provided no obvious benefit.

From a technical and economic perspective, however, knowledge may not be a natural candidate for decentralization: today's large AI providers hold an enormous, even frightening, advantage in the control of knowledge. The development of AI no longer follows a linear or even an exponential curve, but a multi-exponential one.

GPT-4 represents a huge improvement over GPT-3.5 in many ways, and this trajectory is likely to continue. At some point, trying to compete head-on with centralized AI providers becomes unfeasible. A well-designed decentralized network, by contrast, enables an ecosystem in which all parties collaborate to improve model quality, democratizing access to knowledge and sharing its benefits.

Transparency is a second factor to consider when evaluating the merits of decentralization in AI. Foundation model architectures involve millions of interconnected neurons across many layers, making them impractical to understand with traditional monitoring practices. No one really knows what is going on inside GPT-4, and OpenAI has no incentive to be more transparent in this area. Decentralized AI networks can enable open testing benchmarks and guardrails, providing visibility into model behavior without requiring trust in any specific provider.
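To make the idea of an open benchmark concrete, here is a minimal sketch, assuming a hypothetical toy_model stand-in for a real model endpoint and an invented two-question test set: anyone can rerun the same test set and compare the published result hash instead of trusting a provider's self-reported numbers.

```python
# Minimal sketch of an open, reproducible benchmark (hypothetical prompts and
# a toy_model stand-in for a real endpoint). Anyone can rerun it and compare
# the published result hash instead of trusting the provider's claims.
import hashlib
import json

BENCHMARK = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]

def toy_model(prompt):
    """Stand-in for a call to a real model endpoint."""
    answers = {"What is 2 + 2?": "4", "What is the capital of France?": "Paris"}
    return answers.get(prompt, "")

def run_benchmark(model):
    """Score the model on the fixed test set and hash the full result log."""
    results = [{"prompt": p, "expected": e, "answer": model(p)} for p, e in BENCHMARK]
    accuracy = sum(r["answer"] == r["expected"] for r in results) / len(results)
    digest = hashlib.sha256(json.dumps(results, sort_keys=True).encode()).hexdigest()
    return accuracy, digest

print(run_benchmark(toy_model))
```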

If the case for decentralized AI is so clear, why haven't we seen any successful attempts in this space? Although the concept of decentralized artificial intelligence began to emerge as early as the 1990s, no large-scale success has materialized in practice, and this can be attributed to several factors.

Before large models entered the scene, the dominant architectural paradigm consisted of various forms of supervised learning, which required highly curated, labeled datasets that mostly lived within corporate boundaries. Those models were also small enough to be interpreted with mainstream tools. Finally, the case for worrying about control was weak: no model was powerful enough to warrant concern.

Therefore, at the inflection point where foundation models have grown into today's large models, we are beginning to realize that decentralizing AI must look different from previous attempts. We can now start to consider which specific elements need to be decentralized to enable more equitable, sustainable and transparent AI development. This requires deep thinking and research spanning technology, policy and ethics.

When it comes to generative artificial intelligence, there is no single decentralized approach. Instead, decentralization should be considered at the different stages of a foundation model's life cycle.

First, decentralized compute plays a key role in the pre-training and fine-tuning phases. By establishing a decentralized GPU computing network, different parties can jointly participate in model training and optimization, reducing cloud providers' monopoly control over foundation models.
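For example, a coordinator could split a fine-tuning job into shards and assign them to whichever GPU nodes have registered with the network. The sketch below is only illustrative, with hypothetical node names and a naive capacity-based assignment rule; a real network would also handle payments, result verification and fault tolerance.

```python
# Minimal sketch of shard assignment in a decentralized training network
# (hypothetical node names; assignment rule is deliberately simplistic).
from dataclasses import dataclass, field

@dataclass
class GpuNode:
    node_id: str
    gpu_memory_gb: int
    assigned_shards: list = field(default_factory=list)

def assign_shards(nodes, num_shards):
    """Greedily give each data shard to the node with the most spare capacity."""
    for shard in range(num_shards):
        target = min(nodes, key=lambda n: len(n.assigned_shards) / n.gpu_memory_gb)
        target.assigned_shards.append(shard)
    return {n.node_id: n.assigned_shards for n in nodes}

nodes = [GpuNode("node-a", 80), GpuNode("node-b", 24), GpuNode("node-c", 48)]
print(assign_shards(nodes, 10))
```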

Second, data decentralization is also very important, especially in the pre-training and fine-tuning stages. There is currently little transparency into the exact composition of the datasets used to train foundation models. Building a decentralized data network would enable all parties to contribute data transparently and to track how it is used in model training.
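One way to picture this is an append-only provenance log in which every data contribution is hashed and chained to the previous record, so its later use in training can be audited. The following is a rough sketch under that assumption, not a description of any real protocol.

```python
# Minimal sketch of a hash-chained provenance log for data contributions
# (illustrative only; contributor names are hypothetical).
import hashlib
import json
import time

def record_contribution(log, contributor, data_bytes):
    """Append a contribution record chained to the previous one."""
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    entry = {
        "contributor": contributor,
        "data_hash": hashlib.sha256(data_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
record_contribution(log, "data-provider-1", b"corpus shard A")
record_contribution(log, "data-provider-2", b"corpus shard B")
print(json.dumps(log, indent=2))
```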

Third, validation is another important stage in the life cycle of a foundation model, one that requires human intervention and human feedback. Establishing a decentralized network of human and AI validators that can perform specific tasks and leave a traceable record of the results would increase the transparency of the validation process and enable further improvements in the field.
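A toy version of such traceable validation might look like the sketch below, where each (hypothetical) validator's rating of a model output is stored alongside its identity so the aggregated verdict can be audited later.

```python
# Minimal sketch of traceable feedback aggregation from a mixed pool of human
# and AI validators (identifiers and scoring scheme are illustrative).
from collections import defaultdict
from statistics import mean

def submit_rating(ledger, output_id, validator_id, score):
    """Record a 0-1 quality/safety score for a given model output."""
    ledger[output_id].append({"validator": validator_id, "score": score})

def aggregate(ledger, output_id, threshold=0.5):
    """Aggregate all ratings for an output while keeping the full trace."""
    scores = [r["score"] for r in ledger[output_id]]
    return {
        "output_id": output_id,
        "mean_score": mean(scores),
        "approved": mean(scores) >= threshold,
        "trace": ledger[output_id],
    }

ledger = defaultdict(list)
submit_rating(ledger, "output-42", "human-reviewer-1", 0.9)
submit_rating(ledger, "output-42", "ai-judge-1", 0.7)
print(aggregate(ledger, "output-42"))
```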

Finally, it would be an interesting challenge to build a network that can distribute inference workloads, reducing reliance on infrastructure controlled by a single centralized party and bringing greater flexibility and reliability to the adoption of foundation models.
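In the simplest case, a client could route each inference request across several independent providers and fall back when one fails, as in the hedged sketch below; the provider endpoints here are simulated stand-ins, not real services.

```python
# Minimal sketch of routing inference requests across independent providers
# (the providers are simulated stand-ins that randomly fail).
import random

def make_provider(name, failure_rate):
    """Return a fake inference function that sometimes raises an error."""
    def infer(prompt):
        if random.random() < failure_rate:
            raise RuntimeError(f"{name} unavailable")
        return f"[{name}] completion for: {prompt}"
    return infer

def route_inference(prompt, providers):
    """Try providers in random order, falling back until one succeeds."""
    for infer in random.sample(list(providers.values()), len(providers)):
        try:
            return infer(prompt)
        except RuntimeError:
            continue
    raise RuntimeError("all providers failed")

providers = {
    "node-a": make_provider("node-a", 0.3),
    "node-b": make_provider("node-b", 0.3),
    "node-c": make_provider("node-c", 0.3),
}
print(route_inference("Explain decentralized AI in one sentence.", providers))
```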

Summary

With the rapid development of artificial intelligence, we face an urgent question: how can AI development be made fair, sustainable and transparent? Decentralization is arguably the best answer: it promotes innovation, diversity and transparency, protects data privacy, and pushes the development of artificial intelligence in a more just and sustainable direction.

However, achieving this goal requires overcoming technical, economic and ethical challenges. It calls for interdisciplinary research and collaboration, and for sensible policies and mechanisms that ensure decentralized artificial intelligence truly benefits society.


Origin blog.csdn.net/LinkFocus/article/details/133213604