Kenya plans to criminalise the use of “high-risk” artificial intelligence (AI) systems without state approval, targeting tools used in credit scoring, biometrics, and health diagnostics, in a move that could slow product launches and increase legal risks for startups using AI.
A draft Artificial Intelligence Bill 2026 sponsored by Senator Karen Nyamu proposes that “a person shall not develop, deploy or operate a high-risk artificial intelligence system without the approval of the commission,” introducing fines of up to KES 5 million ($38,000) or jail terms of up to three years.
The proposal comes as AI adoption picks up across Kenya’s tech sector, from loan approvals and hiring decisions to fraud detection and customer service, placing systems that directly shape access to money and jobs under potential state control.
For founders, the key question is how “high-risk” AI will be defined in practice. A broad definition could pull in tools used in finance, health, education, and even general-purpose models embedded in local apps. The rules would also extend criminal liability to company directors, placing personal risk on executives who sign off on deployments.
Startups often rely on rapid iteration and third-party APIs from global providers, making a pre-approval requirement a potential bottleneck.
The bill would create an AI commissioner with powers to classify systems, grant approvals, and maintain a public register of AI tools in use. It also allows regulators to inspect systems, data, and related records, widening oversight into how products are built and deployed.
That would shift enforcement beyond civil penalties into criminal law, setting Kenya apart from markets such as the European Union and the United Kingdom, where AI rules rely on audits, compliance checks, and administrative fines rather than jail terms for deployment.
The bill opens access to proprietary datasets and model documentation by allowing inspections of systems and records. Companies handling sensitive data or building in-house models may face pressure to balance compliance with protecting trade secrets.
Mike Olukoye, a Nairobi-based tech legal expert, told TechCabal on Tuesday that tighter controls are needed as AI systems begin “to influence credit decisions, hiring outcomes and access to services.” In that context, Olukoye argued that early oversight is needed to limit harm before it scales.