Across Africa, organisations leveraging AI now face more than 3,000 cyberattacks each week on average, according to new findings from Check Point Software Technologies.
The company says the challenge is worsening as businesses adopt artificial intelligence across daily operations without matching security controls.
The data comes from Check Point’s AI Threat Landscape Report, covering January to February 2026, which shows that while companies roll out generative and agent-based AI tools, many do so with limited visibility into how these systems handle data or interact with internal platforms.
AI adoption is spreading fast across sectors. In many organisations, staff now rely on several AI tools at the same time for writing, coding, analysis and customer support tasks.
That spread has created what researchers describe as “Shadow AI”, where usage sits outside formal monitoring systems.
Check Point says this trend is increasing exposure to risks such as data leaks, credential theft and weak control over third-party integrations.
The report also notes that AI is being deployed not just as a set of tools, but as semi-autonomous agents that can act within enterprise environments.
Speaking on the findings, Ian van Rensburg, Head of Security Engineering, Africa at Check Point Software Technologies, said: “AI transformation is no longer theoretical, it’s happening right now.”
“But too many organisations are modernising faster than they are securing. That gap is quickly becoming one of the most serious business risks in the region.”
The report highlights a case where a developer used an AI-powered development setup to generate 88,000 lines of malware code in less than a week. Check Point says this reveals how AI can shorten development cycles for both legitimate and malicious purposes.
It also found that 90% of organisations using generative AI recorded high-risk prompt activity. In addition, one in every 31 prompts carried the risk of exposing sensitive information, including proprietary code and confidential business data.
Employees, on average, now use around 10 AI tools, usually without central approval or oversight. This creates gaps that traditional security systems, built around networks and endpoints, may not detect.
Check Point argues that organisations need to treat AI systems as core assets rather than add-on tools. The company recommends securing models, data flows, application programming interfaces and autonomous agents together, instead of focusing only on surrounding infrastructure.
Hendrik de Bruin said AI adoption requires stronger governance structures. He pointed to the need for clearer risk classification, improved visibility and defined accountability across teams deploying AI systems.
The report also carries a message for policymakers, as several African countries work on national AI strategies. It suggests that security measures should be built into AI frameworks from the start, rather than added later during implementation.
Check Point adds that fragmented adoption, where teams deploy separate AI tools without central coordination, increases the likelihood of weak points across systems. These gaps can affect both internal operations and supply chains connected to external partners.
The company maintains that traditional cybersecurity approaches are no longer sufficient on their own in environments where AI systems can act with limited human input. It says organisations need prevention-focused models that address threats before they cause disruption.
Organisations that balance innovation with stronger oversight are more likely to manage risks effectively while maintaining operational trust.