Authors
Kailash Thiyagarajan, Independent Researcher, USA
Abstract
The rapid rise of Large Language Models (LLMs) has revolutionized AI-driven applications but has also raised critical concerns regarding bias, misinformation, security, and accountability. In response to these challenges, governments and regulatory bodies are formulating structured policies to ensure the responsible deployment of LLMs. This paper provides a comprehensive analysis of the global regulatory landscape, examining key legislative efforts such as the EU AI Act, the NIST AI Risk Management Framework, and industry-led auditing initiatives. We highlight gaps in current frameworks and propose a structured policy approach that promotes both innovation and accountability. To achieve this, we introduce a multi-stakeholder governance model that integrates regulatory, technical, and ethical perspectives. The paper concludes by discussing the future trajectory of AI regulation and the critical role of standardized auditing in enhancing transparency and fairness in LLMs.
Keywords
LLM Auditing, AI Regulation, Ethical AI, Algorithmic Transparency, Bias and Fairness in AI, Explainability