Opinion: Pro-Social AI: Malaysia’s Hidden Competitive Advantage

AI For Business


The fastest path to future-proof Malaysian business is not more or less artificial intelligence (AI), but different AI. Business leaders rarely disagree that AI will reshape competitive advantage. But what exactly are we optimizing for?

We have a bad habit of falling in love with numbers: call duration, click-through rate, engagement score. They look clean and objective, and they fit nicely into spreadsheets. But anyone who has worked within a large organization knows the secret: when a number becomes a target, people stop telling the truth.

Take a call center as an example. Shorter calls should mean greater efficiency. But they often mean staff push people off the phone abruptly, problems are left half-resolved, and customers call back even more upset. The metric improves; the service does not. We say we measure what we care about, but in practice we measure what is easiest to count, and what cannot be counted gets ignored. Judgment, trust, quality: these things are messy, so they lose out to whatever fits on a dashboard.

Now add AI to the mix. AI systems are built to maximize a single score. Give them a number and they will chase it with relentless focus. They do not understand intentions, values, or common sense; they understand only the game you set, and they will win it. Therein lies the danger. AI can send your engagement graph skyward while quietly degrading the quality of your product. The real risk of AI is not that it will fail. It is that it will succeed too well, optimizing for the wrong things before we realize the damage.

When it replaces judgment with numbers, AI does not save us from bad decisions; it automates them. Malaysian businesses need a more operational framework. ProSocial AI is AI designed and deployed to deliver tangible benefits to people and the planet while remaining bankable and competitive.

The hidden costs of being “smart”: When optimization becomes moral outsourcing

In the boardroom, the phrase “responsible AI” often sounds like legal cover. ProSocial AI flips the script: it treats responsibility as smart risk management and a way to build better, more valuable systems. Why? Because AI does not just automate tasks; it automates selection. Which customers receive credit, insurance, or employment approvals? Which patients are prioritized in a triage system? Which communities get access to public services? Which stories spread through recommendation engines?

Deploying AI without clear prosocial goals quietly delegates moral decisions to the crude metrics that are easiest to code. The ethical dilemmas do not disappear; they are simply flattened into numbers. The result can be “success” on paper and failure in the real world: reputational headwinds, regulatory crackdowns, an exodus of talent, and fragile systems that break down under stress.

ProSocial AI starts with a different set of questions: Success for whom? Over what time horizon? With what implications beyond the balance sheet?

Why is this important for Malaysia now?

Malaysia is actively promoting AI adoption through the National AI Roadmap 2021-2025. At the same time, Malaysia’s National Sustainability Reporting Framework (NSRF) is being aligned with International Sustainability Standards Board (ISSB)-style disclosures, adding momentum to corporate accountability in sustainability reporting. The strategic link many companies miss: AI may improve short-term efficiency, but if it undermines trust, equity, or the health of the planet, the cost comes later, in increased regulation, lost contracts, a higher cost of capital, supply chain trouble, and weakened brands.

ProSocial AI does the opposite. It turns AI investments into a defensible narrative: systems that drive profitability while strengthening public trust and the long-term resilience of people, planet, and profit.

This is particularly important in ASEAN supply chains, where buyers increasingly demand verifiable data on emissions, labor practices, and governance. A 2024 McKinsey study found that 68% of global procurement teams require ESG (environmental, social, and governance) metrics from their suppliers. ProSocial AI is not a feel-good exercise. It is a market-access strategy.

From “ethical AI” to “prosocial AI”: From principles to proofs

One useful distinction emerging in the prosocial discussion is the move from principles to proofs. Ethical AI often asks, “Is this fair?” and gets stuck in debates over definitions. ProSocial AI asks the harder question: “What specific prosocial outcomes does this system deliver, and how do we measure and audit them?”

The ProSocial AI Index uses a “4T” framing (tailor, train, test, target) to move teams from sloganeering to assessment. Whether or not you adopt that exact framework, the core business move is the same: treat social and planetary outcomes as first-class performance indicators rather than public relations props.

What ProSocial AI looks like in the enterprise

For CEOs, ProSocial AI means AI systems that enhance human agency, reduce systemic harm, and strengthen long-term resilience while maintaining economic viability. In practice, this means four design decisions:

1. Write “intent specifications” before technical specifications: Not “build a chatbot,” but “improve complaint resolution time without increasing churn among vulnerable customers or spreading misinformation.” When Malaysia’s RHB Bank introduced an AI customer service system in 2023, it tracked customer satisfaction alongside resolution speed, recognizing that rapid responses could harm vulnerable customers who need human assistance.

2. Measure second-order effects: If AI cuts costs but shifts the burden onto remaining staff or drives employee burnout, that is not efficiency; it is cost shifting. A Penang manufacturing company found that an AI scheduling system reduced overtime costs by 22% but increased worker injury rates by 14% through compressed work patterns. The real costs outweighed the apparent savings.

3. Build with auditability in mind: Assume you will have to explain your system’s outcomes to regulators, partners, and your own employees. If you cannot trace a decision, you cannot defend it. Singapore’s Model AI Governance Framework offers useful guidance that Malaysian businesses can adapt.

4. Treat planetary health as an operational reality and an internal priority: AI expands a company’s computing footprint, energy demand, water usage, and procurement. ProSocial design requires asking what the full footprint is: one widely cited estimate found that training a single large language model can emit as much carbon dioxide as five cars over their lifetimes. This naturally aligns with the trajectory of sustainability disclosure.

ProSocial AI as a bias reduction strategy

Behavioral scientists have warned about WYSIATI, “what you see is all there is,” for decades: we make decisions based on the information in front of us and ignore what sits outside the frame. In AI projects, what we typically see is model accuracy, speed, and return on investment. These numbers are mesmerizing; they appear precise, objective, and reassuring. What we do not see until later is the drift, exclusion, hallucination, manipulation, and wider ramifications. By the time they surface, the system is already hardwired in and expensive to undo.

ProSocial AI is a structured antidote to this blind spot. It forces organizations to think slower and more carefully at the very moments when quick wins are most tempting. It pushes leaders to look beyond prediction and performance toward disciplined impact with clear strategic direction, and it turns vague aspirations like “AI for good” into solid business capability, with social license as a competitive advantage rather than a reputational risk.

A practical starting point: The A-frame

Awareness: List your top four AI use cases and write the actual goal for each, not the proxy metric. Identify what might be optimized unintentionally: “reduce call time” can quietly become “get the customer off the phone in a hurry.”

Appreciation: Map who benefits and who pays, including employees, suppliers, vulnerable customers, regulators, and communities. If you cannot name the “payer,” you probably have not looked hard enough.

Acceptance: Accept that AI is not neutral and that governance is part of performance. If you do not set the intent, the objective function will set it for you.

Accountability: Select two auditable outcome metrics for each AI system: one business metric (margin, retention) and one prosocial metric (complaint fairness, accessibility, verified emissions impact). Review both quarterly at exco level.

Dr. Cornelia C. Walther is a humanitarian practitioner with over 20 years of experience at the United Nations, driving social change across global contexts. She is a Senior Research Fellow at the Sunway Center for Planetary Health, the Wharton School of the University of Pennsylvania, and the Harvard University Learning and Innovation Lab, where her research focuses on hybrid intelligence and prosocial AI. She advises UNFPA on hybrid intelligence and the European Policy Center on prosocial AI, and she works with Malaysian businesses through Sunway’s ProSocial AI Global Strategy and Competitiveness Center. Through the global POZE Alliance, she is pioneering research and practice to strengthen human agency in the age of AI.
