Artificial intelligence (AI) is revolutionizing many sectors by enhancing data processing and decision-making capabilities beyond human limits. However, as AI systems become more sophisticated, they also become more opaque, raising concerns about transparency, trust and fairness.
The “black box” nature common to most AI systems often leaves stakeholders questioning the origins and reliability of AI-generated outputs. In response, technologies such as Explainable AI (XAI) have emerged to demystify AI operations, but often fall short of fully unraveling their complexities.
As AI systems continue to grow in complexity, so too does the need for robust mechanisms to ensure they are not only effective but also trustworthy and fair. Enter blockchain technology, known for enhancing security and transparency through decentralized record-keeping.
Beyond securing financial transactions, blockchain could build a previously unattainable layer of verifiability into AI operations. By addressing some of AI's most persistent challenges, such as data integrity and decision traceability, it could become a key element in the pursuit of transparent and trustworthy AI systems.
Chris Feng, COO of Chainbase, shared his thoughts on the matter in an interview with crypto.news, where he said that while blockchain integration doesn't directly solve every aspect of AI transparency, it does enhance some key areas:
Can blockchain technology actually make AI systems more transparent?
Blockchain technology does not solve the fundamental problem of AI model explainability. It is important to distinguish between interpretability and transparency. The main reason AI models lack explainability is the black-box nature of deep neural networks: although we can observe the inference process, we cannot interpret the logical meaning of each parameter involved.
So how does blockchain technology increase transparency in a way that is different from the increased interpretability offered by technologies such as IBM’s Explainable AI (XAI)?
In the context of Explainable AI (XAI), various methods, such as uncertainty statistics and analysis of model outputs and gradients, are employed to understand how a model functions. Integrating blockchain technology, by contrast, does not change an AI model's internal reasoning or training methods, so it does not improve interpretability. What blockchain can improve is the transparency of training data, procedures, and causal inference. For example, blockchain technology allows us to track the data used to train a model and incorporate community input into the decision-making process. All of these data and procedures can be securely recorded on the blockchain, thereby improving the transparency of both the AI model building process and the inference process.
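As a rough illustration of the provenance tracking Feng describes, the sketch below fingerprints a training dataset and appends it, with run metadata, to a hash-chained log standing in for an on-chain registry. The `ProvenanceLedger` class and its fields are hypothetical, not Chainbase's actual design; a real deployment would write these records to a smart contract rather than a Python list.

```python
import hashlib
import json
import time

def dataset_fingerprint(path: str) -> str:
    """Hash a dataset file so any later tampering is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class ProvenanceLedger:
    """Append-only, hash-chained log standing in for an on-chain registry."""

    def __init__(self):
        self.records = []

    def record_training_run(self, dataset_path: str, model_id: str, notes: str) -> dict:
        entry = {
            "model_id": model_id,
            "dataset_sha256": dataset_fingerprint(dataset_path),
            "timestamp": int(time.time()),
            "notes": notes,  # e.g., community review outcomes
        }
        # Chain each entry to the previous one, as a blockchain would.
        entry["prev_hash"] = self.records[-1]["entry_hash"] if self.records else "0" * 64
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(entry)
        return entry
```

Anyone holding the dataset can recompute its fingerprint and compare it against the recorded entry, which is the verifiability property blockchain adds even though the model itself remains a black box.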
Given the widespread issue of bias in AI algorithms, how effective will blockchain be in ensuring data provenance and integrity throughout the AI lifecycle?
Current blockchain methods have demonstrated great potential in securely storing and serving training data for AI models, and utilizing distributed nodes enhances confidentiality and security. Bittensor, for example, takes a distributed training approach, spreading data across multiple nodes and implementing algorithms to prevent fraud among them, which makes distributed AI model training more resilient. Protecting user data during inference is also crucial: Ritual, for instance, encrypts data before distributing it to off-chain nodes for inference computation.
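The snippet below is a generic sketch of the encrypt-before-dispatch pattern Feng attributes to Ritual, using symmetric encryption from the `cryptography` package. It is not Ritual's actual protocol; the key handling and the `off_chain_inference` function are illustrative assumptions.

```python
from cryptography.fernet import Fernet

# Client side: encrypt the payload before it leaves the user's machine.
key = Fernet.generate_key()  # in practice, negotiated with the node via key exchange
cipher = Fernet(key)
payload = b'{"prompt": "classify this transaction"}'
encrypted = cipher.encrypt(payload)

# Node side (hypothetical): only a node holding the key can recover the payload.
def off_chain_inference(ciphertext: bytes, node_key: bytes) -> bytes:
    data = Fernet(node_key).decrypt(ciphertext)  # raises if the ciphertext was tampered with
    # ... run the model on `data` and return an (ideally also encrypted) result ...
    return data

assert off_chain_inference(encrypted, key) == payload
```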
Are there any limitations to this approach?
A notable limitation is monitoring for model bias that originates in the training data. Specifically, identifying gender- or race-related bias in model predictions arising from the training data is often overlooked. Currently, neither blockchain technology nor existing explainability and debiasing techniques can effectively target and eliminate this kind of bias.
Do you think blockchain can increase transparency in the validation and testing phase of AI models?
Companies like Bittensor, Ritual, and Santiment are leveraging blockchain technology to connect on-chain smart contracts with off-chain computing capabilities. This integration enables on-chain inference and makes the data, models, and computing power involved verifiable, increasing transparency throughout the process.
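One common pattern behind this kind of on-chain/off-chain integration is commit-and-verify: the off-chain node publishes a hash binding the input, model version, and output, so anyone can later check that a reported result matches what was committed. The sketch below illustrates that generic pattern; it is not the specific protocol of Bittensor, Ritual, or Santiment.

```python
import hashlib

def commitment(input_hash: str, model_version: str, output: str) -> str:
    """Hash binding an inference result to its input and model version."""
    preimage = f"{input_hash}|{model_version}|{output}".encode()
    return hashlib.sha256(preimage).hexdigest()

# Off-chain node: run inference, then post the commitment on-chain.
input_hash = hashlib.sha256(b"user query").hexdigest()
result = "APPROVED"
posted = commitment(input_hash, "model-v1.3", result)  # illustrative version tag

# Any verifier: recompute the commitment from the reported values.
assert commitment(input_hash, "model-v1.3", result) == posted
```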
What do you think is the best consensus mechanism for a blockchain network to validate AI decisions?
I personally advocate combining Proof of Stake (PoS) and Proof of Authority (PoA) mechanisms. Unlike traditional distributed computing, AI training and inference require consistent, stable GPU resources over long periods, so it is essential to verify the validity and reliability of the participating nodes. Currently, reliable computing resources are concentrated mainly in data centers of various scales, since consumer-grade GPUs may not be able to fully support AI services on the blockchain.
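A minimal sketch of the hybrid admission rule Feng advocates: a node may validate AI workloads only if it both stakes enough tokens (PoS) and appears on a list of vetted operators (PoA). The threshold, registry, and `Node` fields below are illustrative assumptions, not any real chain's parameters.

```python
from dataclasses import dataclass

MIN_STAKE = 10_000            # illustrative stake threshold (PoS)
AUTHORIZED_OPERATORS = {      # vetted data-center operators (PoA)
    "dc-frankfurt-01",
    "dc-singapore-02",
}

@dataclass
class Node:
    operator_id: str
    stake: int
    gpu_count: int

def may_validate(node: Node) -> bool:
    """Hybrid PoS + PoA check: economic stake AND a vetted identity."""
    has_stake = node.stake >= MIN_STAKE
    is_authorized = node.operator_id in AUTHORIZED_OPERATORS
    has_hardware = node.gpu_count >= 1  # stable GPU capacity required
    return has_stake and is_authorized and has_hardware

print(may_validate(Node("dc-frankfurt-01", 25_000, 8)))  # True
print(may_validate(Node("home-rig-7", 25_000, 1)))       # False: not a vetted operator
```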
Looking forward, what creative approaches and advancements in blockchain technology do you foresee being key in overcoming current transparency challenges in AI, and how might these change the landscape of trust and accountability in AI?
I believe current blockchain-based AI applications face several challenges, such as clarifying the relationship between model debiasing and training data, and leveraging blockchain technology to detect and mitigate black-box attacks. I am actively exploring ways to incentivize the community to run experiments on model interpretability and increase the transparency of AI models. I am also considering how blockchain can turn AI into a true public good. Public goods are defined by transparency, social benefits, and contributions to the public interest, yet current AI technologies often sit somewhere between experimental projects and commercial products. Adopting blockchain networks that incentivize participation and distribute value could promote the democratization, accessibility, and decentralization of AI. This approach could enable actionable transparency and increase trust in AI systems.