In the Data Insiders podcast, we discuss how AI is reshaping data security. With insights from Microsoft and Tietoevry, we cover risks, governance – and cultural shifts vital for protecting data.
In our Data Insiders podcast, we bring you up to speed on this topic with two leading experts: Microsoft’s Christine Dienel and Bharath Vijayaraghavan from Tietoevry Tech Services.
Generative AI has rapidly revolutionized productivity. At the same time, AI solutions introduce new types of risks that organizations need to prepare for. In particular, unprecedented volumes of data coupled with human error can all too easily put sensitive data at risk.
In the new episode of our Data Insiders podcast, we sat down to explore the theme with two professionals dedicated to helping organizations navigate data-related issues. Christine Dienel is a Technical Specialist at Microsoft, where she works with the world’s top companies to turn security into a competitive advantage. Bharath Vijayaraghavan is a Lead AI Strategist at Tietoevry Tech Services, known for his extensive career in all things data and AI.
The resulting conversation approached the issue from many angles, covering not only technical perspectives but also the equally important human element. In the grand scheme of things, we are all just taking the first steps in figuring out how to use these tools in our work – and that process includes learning the principles of safer use.
In some ways, the security issues raised by AI are not new. Rather, the technology has accentuated the challenges cybersecurity professionals have been working with for decades. Overexposure and oversharing become even greater issues when the amount of data increases rapidly – and when we don’t fully understand the tech we’re using.
“We constantly create a lot of unstructured data – it’s difficult to have full visibility into all of it”, Dienel notes.
Both guests agree that, for this reason, developing a better understanding of your data should be a top priority when building AI solutions. A common thread among successful AI implementations is a focus on governance from the get-go.
“Many companies start building AI solutions on top of systems already containing a lot of sensitive data. Simply capturing everything is rarely a good strategy. By identifying the elements within the unstructured data, we can find out what’s actually important and reduce the risk of overexposure”, Vijayaraghavan states.
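To make the idea tangible, here is a minimal sketch – in Python, purely illustrative and not from the episode – of what “identifying the elements” in unstructured data could look like: a simple pattern scan that flags documents containing sensitive strings before they are fed into an AI index. The pattern set, function names, and threshold are all assumptions; real classification tools cover far more categories and use far more robust detection.

```python
import re

# Illustrative patterns only; real classifiers cover many more categories
# (names, addresses, health data, contracts, and so on).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_document(text: str) -> dict:
    """Count hits per sensitive category in a piece of unstructured text."""
    return {name: len(p.findall(text)) for name, p in SENSITIVE_PATTERNS.items()}

def is_overexposed(text: str, threshold: int = 1) -> bool:
    """Flag a document for review before it is ingested into an AI index."""
    return sum(scan_document(text).values()) >= threshold

doc = "Contact anna@example.com, card 4111 1111 1111 1111."
print(scan_document(doc))   # {'email': 1, 'card_number': 1}
print(is_overexposed(doc))  # True -> redact or exclude before indexing
```

Even a crude filter like this captures the core of the advice: knowing what is inside the data before capturing it, rather than indexing everything and hoping for the best.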
If messy rooms cause problems in the digital realm too, how can companies start making sense of their metaphorical dusty attics? Sorting through decades’ worth of unstructured data hardly sounds like anyone’s dream project.
Dienel suggests that this virtual KonMari is the perfect task for AI: there are many automation tools created for this very purpose.
“When labeling the data, it’s a good idea to have a taxonomy of no more than three levels or so. This creates a system that is easier for employees to understand. Documents that are labeled as high-risk can then be automatically encrypted so that they can’t be shared by accident.”
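As a rough illustration of how such a shallow taxonomy and an auto-encryption rule could fit together, here is a hypothetical Python sketch using the widely available cryptography package. The label names, the store_document helper, and the key handling are illustrative assumptions; dedicated data governance products implement this natively and far more robustly.

```python
from enum import Enum
from cryptography.fernet import Fernet  # pip install cryptography

# A shallow taxonomy – three levels or so – is easy for employees to apply.
class Label(Enum):
    PUBLIC = 1
    INTERNAL = 2
    HIGH_RISK = 3

KEY = Fernet.generate_key()  # in production, keys belong in a managed key vault

def store_document(content: bytes, label: Label) -> bytes:
    """Apply the label's policy: high-risk documents are encrypted at rest,
    so they cannot be shared by accident in readable form."""
    if label is Label.HIGH_RISK:
        return Fernet(KEY).encrypt(content)
    return content

protected = store_document(b"Q3 acquisition plan", Label.HIGH_RISK)
assert protected != b"Q3 acquisition plan"  # only the ciphertext is stored
```

The design point is that the protection follows the label automatically – employees only need to get the classification right, not remember a separate encryption step.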
According to Dienel, this approach has benefits that go beyond security. When AI solutions are based on well-structured, curated data, the quality of the output increases too – as does the efficiency of the people using the tool.
There is no successful data governance without an understanding of the human element. With AI tools becoming an integral part of everyone’s work routines, the statement rings truer than ever.
“Rather than just a technical term, data governance should be made part of your company’s mission statement”, Vijayaraghavan claims.
Dienel elaborates on this idea by describing the ways data security can become ingrained in company cultures.
“Making sure you involve the entire business when implementing these technologies is crucial. It shouldn’t be just the IT department’s responsibility – you need to collaborate with everyone from business managers to HR and see how they use the data.”
The advice is useful for everyone: even at companies without custom AI tools of their own, employees are likely to use third-party solutions such as ChatGPT in their work. The risks of entering company data into these tools are not yet widely understood.
Still, Dienel says this doesn’t mean companies should discourage teams from using these applications. Instead, sharing information is the way forward. This might include measures like AI sandboxes, where employees can get acquainted with the tools within a safe environment. Investing in education goes a long way.
“At Microsoft, we offer a lot of internal training”, Dienel reveals.
“What matters is understanding that this is not a one-off project – we always need to keep up and maintain the dialogue with our teams.”
Data Insiders, the #1 Nordic data podcast, invites top professionals and visionaries of data-driven business to share their learnings and insights. Find all episodes on Spotify!