
Where does AI fit in the security team’s processes?

New tools are transforming the security landscape, enabling new ways to safeguard against evolving threats

Tim Mullen, Chief Information Security Officer
& Julian Head, Director of Information Security Architecture
December 6, 2023


The growing capabilities and accessibility of artificial intelligence (AI) tools are making them an indispensable part of the modern “corporate toolbox.” Just as marketing and communications teams are using AI tools to mine customer data for market opportunities, security teams are turning to AI tools to maintain a competitive advantage in their operational environment.  

AI algorithms, for example, allow security professionals to analyze vast amounts of security data for signs of internal or external attacks, or to identify activities of third-party suppliers that may pose risks to the company’s security environment. AI tools can also be used to scan networks for vulnerabilities, automate tasks involved in incident responses, and identify anomalies that may indicate malicious activity.

In this post, we’ll look at the ways that security professionals can get the most benefit from using AI tools to create and maintain a robust corporate security environment.

Learn more about privacy and risk management in the age of AI in our webinar Building your AI inventory: Strategies for evolving privacy and risk management programs.

 

How to prompt AI for security use 

The key to using an AI model to detect and identify security threats is to provide it with instructions that define the task you need it to perform, a process known as prompting. Here are some guidelines for getting the most effective results from your AI prompts; a short worked example follows the list.

  • Specify your security objectives clearly and simply: Write prompts that target the specific threats you want to detect and identify. Vague or ambiguous prompts will lead to inaccurate results. 

  • Define the scope of the security task: Provide a network IP range or specify which devices should be accessed, for example.

  • Provide context: Give the AI context about the environment and the security challenges you’re facing using relevant, industry-specific language.

  • Ask for analysis and recommendations: Instead of just requesting raw data, ask the AI to analyze the data and make recommendations. Ask it to identify specific security threats and suggest mitigation strategies.

  • Review and validate AI results: AI models can make mistakes. They should be used in conjunction with human review and decision-making.
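
To make these guidelines concrete, here is a minimal sketch of a prompt an analyst might assemble for a threat-hunting task. The objective, subnet, and log excerpt are hypothetical placeholders, and the finished string would be submitted to whichever AI tool your team uses, with the response reviewed by a human analyst before any action is taken.

```python
# Minimal sketch: assembling a security-analysis prompt that follows the
# guidelines above. All values (objective, subnet, log lines) are hypothetical.

objective = "Identify signs of brute-force authentication attacks."  # clear, specific objective
scope = "Limit the analysis to authentication logs from the 10.0.8.0/24 VPN subnet."  # defined scope
context = (
    "We are a financial-services company; failed logins against privileged "
    "accounts outside business hours are of particular concern."  # relevant, industry-specific context
)
ask = (
    "Analyze the log excerpt below, list any suspected attacks with the evidence "
    "for each, and recommend mitigation steps. State your confidence."  # analysis + recommendations, not raw data
)
log_excerpt = """\
2023-11-30T02:14:09Z sshd[4121]: Failed password for admin from 10.0.8.77
2023-11-30T02:14:11Z sshd[4121]: Failed password for admin from 10.0.8.77
2023-11-30T02:14:13Z sshd[4121]: Failed password for admin from 10.0.8.77
"""

prompt = "\n\n".join([objective, scope, context, ask, "LOGS:\n" + log_excerpt])
print(prompt)  # submit this to your AI tool; review the output before acting on it
```

Each guideline maps to one building block of the prompt, and the final instruction asks for analysis and recommendations rather than raw data.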

 

How to use pattern recognition to reduce manual processes

The growing preference to use AI tools in place of manual, human-based analyses stems from the abilities of machine learning (ML)-based algorithms to analyze huge amounts of data quickly and efficiently, identify patterns indicative of potential security threats, and make predictions to help security teams respond to future threats, all without getting tired, hungry, or distracted. ML algorithms can also detect threats in real time, allowing them to stop attacks before they can cause damage.

Data discrepancies or attack chains can be too subtle, or too dispersed in time, for humans with limited time and data-analysis capacity to recognize. Humans can also bring biases that cloud their judgment when analyzing data. ML algorithms, by contrast, can review data for tell-tale patterns of intrusion more objectively.

Here are some of the ways that your security team can use ML pattern-recognition algorithms to strengthen and complement manual, human-based analyses; a short illustrative sketch follows the list. 

  • Analyze network traffic for suspicious activity: ML algorithms can detect unusual patterns such as traffic spikes, changes in access behavior, or access requests unrelated to a user’s duties. 

  • Analyze security logs to identify potential attacks: ML algorithms can analyze security logs to identify incidents such as failed log-in attempts or unauthorized access to sensitive data.

  • Analyze user behavior to identify insider threats: ML algorithms can detect unusual insider activity, such as attempts to access sensitive data or to exfiltrate data from your organization.

  • Analyze social media data to identify potential threats: ML algorithms can analyze social media data to identify potential threats of violence or terrorist activity.
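
As a concrete illustration of the log- and traffic-analysis items above, here is a brief sketch that trains an unsupervised anomaly detector on simple per-user login features. It assumes scikit-learn and NumPy are available, and the feature set and synthetic data are hypothetical stand-ins for values you would derive from real authentication logs.

```python
# Illustrative sketch: flagging anomalous login behavior with an unsupervised model.
# The features and synthetic data are hypothetical placeholders for values derived
# from real authentication logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per user-day: [failed_logins, distinct_source_ips, after_hours_logins]
normal = rng.poisson(lam=[1.0, 1.5, 0.2], size=(500, 3))      # typical activity
suspicious = rng.poisson(lam=[30.0, 8.0, 6.0], size=(5, 3))   # brute-force-like activity
X = np.vstack([normal, suspicious]).astype(float)

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

labels = model.predict(X)                 # -1 = anomaly, 1 = normal
flagged = np.where(labels == -1)[0]
print(f"Flagged {len(flagged)} user-days for analyst review: {flagged.tolist()}")
```

Anything the model flags would still be routed to a human analyst, consistent with the review caveat below.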

ML algorithms, however, should never be used without review by human security analysts. While algorithms operate on data and patterns, humans are better equipped to interpret and apply real-world context to ML outputs. ML algorithms are limited by the data they’re trained on and may not be able to make decisions consistent with common sense or human values. Generally, a partnership between AI tools and your security staff will provide the greatest opportunity for developing creative and appropriate solutions to unforeseen security events.

 

“AI/ML tools can provide a force multiplier to human intelligence to help you create a robust, resilient security environment. Human intelligence is critical for evaluating the results of AI/ML solutions, providing corrective guidance, and responding to unforeseen circumstances.” 

— Julian Head, Director, Information Security Architecture, OneTrust

 

Your security team will also play a central and critical role in training ML algorithms, which should be done continuously. They’ll need to carefully curate, clean, and prepare the data used to train algorithms, ideally using actual logs and forensic analysis of historic events to ensure the quality and relevance of AI-generated results. If your training data does not cover a security scenario of interest, the chances of identifying and classifying such an event in the future are very low. 
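
As a small illustration of that curation step, the sketch below cleans a toy set of historical log records labeled from past forensic findings and holds out a test split for evaluating a detector. The column names, values, and labels are hypothetical, and pandas and scikit-learn are assumed.

```python
# Illustrative sketch: preparing historical incident data for model training.
# Column names and values are hypothetical placeholders for your own logs and
# forensic findings.
import pandas as pd
from sklearn.model_selection import train_test_split

records = pd.DataFrame({
    "failed_logins":     [0, 2, 41, 1, 38, 3, None, 0],
    "bytes_exfiltrated": [0, 0, 10_500_000, 0, 8_200_000, 0, 0, 0],
    "label":             ["benign", "benign", "incident", "benign",
                          "incident", "benign", "benign", "benign"],
})

# Curate and clean: drop incomplete rows and exact duplicates before training.
clean = records.dropna().drop_duplicates()

X = clean[["failed_logins", "bytes_exfiltrated"]]
y = clean["label"]

# Hold out data so any model is evaluated on events it has not seen; scenarios
# absent from the training data are unlikely to be detected later.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
print(len(X_train), "training rows,", len(X_test), "test rows")
```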

 

AI creates better security detection

Fortified by their abilities to analyze data at scale, identify patterns and anomalies, and learn from the past to improve their performance over time, AI tools can be a welcome addition to your corporate security portfolio. Here are some key tasks where AI tools can have an immediate impact on your internal and external security environment; a short sketch of the user behavior analytics use case follows the list.

  • Threat detection: ML algorithms can detect malware, phishing attacks, and other cyber threats in real time. They can also analyze network traffic, email messages, and other data sources to identify suspicious activity.

  • Incident response: AI tools can help automate your incident response process, speeding up the time required to identify, contain, and remediate cyberattacks. 

  • Vulnerability assessment: ML algorithms can analyze code and system configurations to identify potential vulnerabilities. AI tools can also help identify and remediate vulnerabilities before they can be exploited by attackers. 

  • User behavior analytics: AI tools can be used to analyze user behavior data to identify anomalous activity indicative of a cyberattack. ML algorithms can generate profiles of normal user behavior and flag any deviations from those profiles.

  • Improved security posture: AI tools can be used to identify and eliminate unnecessary access points to your organization’s systems and data, thereby reducing your overall attack surface.
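
Here is a minimal sketch of the user behavior analytics task mentioned above: build a per-user baseline from recent activity and flag deviations for review. The users, counts, and threshold are hypothetical, and a real deployment would profile many more signals.

```python
# Illustrative sketch: profiling "normal" per-user activity and flagging deviations.
# User names, counts, and the threshold are hypothetical.
from statistics import mean, stdev

# Daily count of sensitive-file accesses per user over the past two weeks.
history = {
    "alice": [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 2],
    "bob":   [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0],
}
today = {"alice": 4, "bob": 27}  # bob's spike should be flagged

THRESHOLD = 3.0  # flag anything more than 3 standard deviations above baseline

for user, counts in history.items():
    baseline, spread = mean(counts), stdev(counts)
    z = (today[user] - baseline) / spread if spread else float("inf")
    if z > THRESHOLD:
        print(f"{user}: {today[user]} accesses today (z={z:.1f}) -> route to an analyst")
```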

 

Taking steps to deploy AI in your organization

Of course, none of the above can begin in earnest without the proper steps and best practices in place. Before jumping into the AI deep end without a flotation device, organizations will want to focus on baseline requirements, including:

  • Creating an inventory for internal AI asset management (a minimal example record follows this list)

  • Updating (or creating) an AI policy and acceptable use policy

  • Mandating company-wide AI training, with additional role-specific training for employees who use AI

  • Establishing an AI governance committee or architectural review board (how is AI being used internally, which vendors use it, what data can be used, etc.)

  • Updating your third-party risk management process to account for AI in vendors, partners, and solutions

  • Recommending InfoSec controls for AI, based on the NIST RMF

  • Defining project and solution design requirements for AI across the company
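
As a starting point for the inventory item above, the sketch below shows one possible shape for an internal AI asset record. The field names and values are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch: one possible record shape for an internal AI asset inventory.
# Field names and values are assumptions, not a prescribed schema.
from dataclasses import dataclass, field, asdict

@dataclass
class AIAssetRecord:
    name: str
    owner: str                      # accountable team or individual
    vendor: str                     # "internal" for in-house models
    use_case: str
    data_categories: list[str] = field(default_factory=list)  # data the system may process
    risk_tier: str = "unassessed"   # e.g., as rated by your governance committee
    approved_by_review_board: bool = False

entry = AIAssetRecord(
    name="security-log-anomaly-detector",
    owner="Information Security",
    vendor="internal",
    use_case="Flag anomalous authentication activity for analyst review",
    data_categories=["authentication logs"],
)
print(asdict(entry))
```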

To learn more about putting AI tools to work fortifying your corporate security environment, please check out our AI Governance tool.

