Today, every company wants to be an AI company, yet only 1% of firms consider themselves fully mature in AI adoption, according to McKinsey.
As we move from chatbots to copilots to autonomous AI agents or “agentic systems,” companies that haven’t already implemented AI risk losing significant ground to competitors. This could happen faster than they think.
Autonomous AI agents go beyond pre-defined scripts to handle nuanced interactions. They can not only generate content but make decisions and take action with limited or no human supervision. The move to intelligent, scalable digital labour represents a true revolution.
By 2028, Gartner forecasts that 33% of enterprise software applications will include agentic AI, enabling 15% of day-to-day work decisions to be made autonomously.
This shift has significant implications for businesses: the potential for a digital labour force to work alongside humans, reducing costs and driving innovation and scalability.
For the first time, workforces can be supplemented by autonomous AI agents working around the clock, boosting productivity, efficiency, and competitive advantage.
Deloitte predicts that 25% of companies using generative AI will launch agentic AI pilots this year.
Across every industry, AI agents are making a significant impact. In customer service, they offer 24/7 support, handling a broad range of issues. For inventory management, they automate tasks, optimise stock levels, and provide real-time insights.
In recruitment, they streamline the hiring process by screening resumes, scheduling interviews, and conducting initial assessments, reducing the workload on human recruiters.
By taking over repetitive tasks, AI agents allow workers to focus on high-value contributions, driving creativity, strategy, and meaningful impact.
Beyond business, this technology is improving students’ academic performance by providing personalised tutoring. In healthcare, AI agents reduce administrative burdens, allowing professionals to focus on complex cases and monitor patient progress, leading to better health outcomes.
The shift to agentic AI systems brings disruptions and risks, not least around trust and data accuracy. Trusting the technology is key to integrating agents.
According to Salesforce research, 93% of global desk workers don’t consider AI outputs completely trustworthy for work-related tasks. Sixty percent of consumers say advances in AI make trust even more important.
To build trust, it’s crucial to ensure that AI systems use accurate and relevant data, maintain privacy, and operate within ethical and legal boundaries. This means implementing robust data governance and oversight.
AI agents must also be transparent and explainable, so users know when they are interacting with an AI and how it operates. Clear accountability is essential to define responsibility for the agent’s performance and trusted outputs.
The solution to increasing productivity and building trust is not as simple as implementing AI agents immediately, according to a new Salesforce white paper.
The white paper outlines key considerations for designing and using AI agents, and explains how global policymakers can unlock the technology’s full potential.
To achieve a smooth and beneficial integration, businesses, governments, non-profits, and academia must collaborate to create comprehensive guidelines and guardrails.
Continuous training programs are also key. They help AI stay up-to-date and work effectively alongside humans, enhancing productivity and allowing employees to focus on more strategic tasks.
Without proper oversight, autonomous AI can make decisions that conflict with human values or ethics, leading to loss of trust, legal issues, and damaged reputations. To avoid these risks, a multistakeholder approach is essential.
It’s no longer a question of whether AI agents should be integrated into workforces, but how best to optimise human and digital labour working together to reach desired goals.
Although AI agents are the latest technology breakthrough, the fundamental principles of sound AI public policy that protects people and fosters innovation remain unchanged: risk-based approaches, with clear delineation of the different roles in the ecosystem, supported by robust privacy, transparency, and safety guardrails.
By addressing these concerns, we can envision a future with new levels of productivity and prosperity, driven by a digital workforce that continuously learns and improves.