By Tan Sri Lee Lam Thye, Chairman, Alliance for a Safe Community
KOTA KINABALU: In an era where artificial intelligence (AI) is rapidly shaping every facet of human life, it is critical that we ensure this powerful technology is developed and deployed with a human-centric approach.
AI holds the potential to solve some of humanity’s most pressing challenges, from healthcare innovations to environmental sustainability, but it must always serve the greater good.
To humanise AI is to embed ethical considerations, transparency, and empathy into the heart of its design.
We must remember that AI is not just a tool; it reflects the values of those who create it.
Therefore, AI development should prioritise fairness, accountability, and inclusivity. This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few.
Governments, industries, and communities must work together to create a governance framework that fosters innovation while protecting privacy and rights.
We must also emphasise the importance of educating our workforce and future generations to work alongside AI, harnessing its capabilities while maintaining our uniquely human traits of creativity, compassion, and critical thinking.
As AI continues to transform the way we live, work, and interact, it is increasingly urgent to ensure that its development and use are grounded in responsibility, accountability, and integrity.
The Alliance for a Safe Community calls for clear, forward-looking regulations and a comprehensive ethical framework to govern AI usage to safeguard the public interest.
The Need for AI Regulation
AI technologies are rapidly being adopted across sectors—from healthcare and education to finance, law enforcement, and public services. While these advancements offer significant benefits, they also pose risks, including:
• Invasion of privacy and misuse of personal data;
• Algorithmic bias leading to discrimination or injustice;
• Job displacement and economic inequality;
• Deepfakes and misinformation.
Without proper regulation, AI could exacerbate existing societal challenges and even introduce new threats.
There must be checks and balances to ensure that AI serves humanity and does not compromise safety, security, or fundamental rights.
We propose the following elements as part of a robust regulatory framework:
- AI Accountability Laws – Define legal responsibility for harm caused by AI systems, especially in high-risk applications.
- Transparency and Explainability – Mandate that AI decisions affecting individuals (e.g., in hiring, credit scoring, or medical diagnoses) be explainable and transparent.
- Data Protection and Privacy Standards – Strengthen data governance frameworks to prevent unauthorized access, misuse, or exploitation of personal data by AI systems.
- Risk Assessment and Certification – Require pre-deployment risk assessments and certification processes for high-impact AI tools.
- Public Oversight Bodies – Establish independent agencies to oversee compliance, conduct audits, and respond to grievances involving AI.
The Call for a Code of Ethics and Integrity in AI
Technology alone cannot determine what is right or just. We must embed ethical principles into every stage of AI development and deployment. A Code of Ethics should include:
- Human-Centric Design – AI must prioritise human dignity, autonomy, and well-being.
- Non-Discrimination and Fairness – AI systems must not reinforce or amplify social, racial, gender, or economic bias.
- Integrity and Honesty – Developers and users must avoid deceptive practices and be truthful about AI capabilities and limitations.
- Environmental Responsibility – Developers should consider the energy and environmental impact of AI technologies.
- Collaboration and Inclusivity – The development of AI standards must include voices from all segments of society, especially marginalised communities.
Conclusion
Artificial intelligence is one of the most powerful tools of our time. Like any powerful tool, it must be handled with care, guided by laws, and shaped by ethical values. We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity.
The future of AI should not be one where technology dictates the terms of our humanity. Instead, we must chart a course where AI amplifies our best qualities, helping us to live more fulfilling lives, build fairer societies, and safeguard the well-being of future generations. Only by humanising AI can we ensure that its promise is realised in a way that serves all of mankind.
TAN SRI LEE LAM THYE
CHAIRMAN
ALLIANCE FOR A SAFE COMMUNITY