TechNews

Privacy and Human Rights in an AI World

Cisco experts weigh in on the 2022 Data Privacy Benchmark Study, and what leaders can learn from it

Privacy and Human Rights in an AI World - Image by Cisco
Image Credit: Cisco Newsroom

Technology is a force for positive change in our lives. But the more we depend on it, the more we generate data — a lot of data. 

That’s why data privacy has become such a critical issue. Increasingly, customers demand it, governments enforce it, and smart organisations build it into their strategies, processes, and products. 

But to find out just where we stand on data privacy, Cisco conducted its 2022 Data Privacy Benchmark Study.

Based on an anonymous survey of 4,900 security and IT professionals across 27 geographies, the report highlights key trends and concerns emerging in the privacy space, including the impact of AI. 

Among the key findings? A full 90 percent of respondents now see data privacy as a business imperative. Another 90 percent would not buy from an organisation that does not protect data. And these concerns remained consistent across regions and cultures. 

“Over the past few years, we’ve seen privacy mature and expand,” said Robert Waitman, Cisco’s director of data privacy and an author of the report. “It’s now critical for businesses because customers are driving a lot of the imperative. We’ve seen budgets expand, along with the benefits of those investments.”  

These changes also reflect a shift in awareness. Data privacy is viewed as a fundamental human right by the U.N., many governments, and companies like Cisco. So, the growing business imperative is increasingly infused with a higher purpose. 

“When I started in privacy almost 20 years ago, it was really a brand exercise,” said Harvey Jang, Cisco’s vice president and chief privacy officer. “You had marketing teams leading privacy. And then with GDPR [Europe’s General Data Protection Regulation], things shifted to the compliance side because companies feared getting fined 4% of revenue. Now, we’re seeing the pendulum swinging again, with privacy driven by business need and brand.”

Attitudes towards legislation also illustrate the mainstream embrace of data privacy. Fully 83 percent of respondents believe that data-privacy laws have a positive impact, while only 3 percent see a negative one. 

“That’s an unbelievably strong endorsement of the many privacy laws that have been enacted around the world,” said Waitman. 

However, he was quick to add that compliance with such laws, which exist in about two-thirds of the world's countries, is increasingly viewed as table stakes. 

“Customers are saying, ‘we expect that your company specifies a clear privacy policy that aligns with ours,’” Waitman continued. “Customers often demand that organisations set higher standards than those specified in regulatory requirements.” 

Privacy in an era of emergent technologies

Despite progress in data privacy awareness, concerns persist.

“People we surveyed don’t feel they can adequately protect their data,” said Waitman. “They feel like they do not understand, control, or manage what is happening with their data.”

Increasingly, that translates into a hesitance around new technologies.

Forty-six percent of respondents do not understand what organisations are collecting and doing with their data, and this may limit their interest in new technologies like AI. “People are reticent to engage with new technologies,” Jang stressed.

Another 56 percent of respondents expressed concerns about how businesses are using AI today. “Respondents are concerned about how businesses may be using AI to make automated decisions that may materially impact their lives,” said Waitman. 

As new technologies emerge, Cisco is shaping its approach to data privacy around continuous engagement with customers and their feedback.

“The applications of AI are wide-ranging, and increasingly important to our customers,” said Anurag Dhingra, Cisco Vice President and CTO for Collaboration. “We need to be sure that we are building systems that are fair and equitable and serve Cisco’s mission to power a secure and inclusive future for all.”

“Our teams are applying AI to solve all sorts of problems,” continued Dhingra. “Everything from managing security threats, optimising networks, and powering inclusive collaboration needs to be managed in a way that is true to Cisco’s mission.” 

Across the technology industry, companies are working out how to govern AI ethically and responsibly, in response to technological and operational challenges, particularly around bias and diversity.

“We need to ensure that explicit and implicit human biases do not get ingrained or amplified in AI systems that we are building,” Dhingra warned. “It’s a big challenge, but it’s all about fairness and privacy and security.”

“Diversity and inclusion are key ingredients in making sure that we are building systems that are fair,” he added. “They have to represent all of humanity and not just a narrow slice of it. A diverse, inclusive team is naturally going to build better systems.” 

A new trust standard

Dhingra is now leading a cross-functional executive team establishing standards and processes for responsible AI. However, establishing a foundation for this effort requires a comprehensive approach to the development process.

“The program is holistic in the sense that it creates guidelines for engineering teams — for how they think about these concerns, how they should evaluate these questions,” Dhingra said. “Our approach is to build the proper security, privacy and human rights controls into the full life cycle of product development.”

This approach requires clear guiding principles that are aligned with Cisco’s goals and operating plan. 

“We’ve established six foundational principles to guide our decision making and development around AI,” Dhingra said. The guiding principles are transparency, fairness, accountability, privacy, security, and reliability, all of which are relevant to AI’s impact on ethics and human rights.

“Our recommendation is that firms think hard about any use of AI where decision making may be somewhat hidden from the customer,” he concluded.
