Top Pros and Cons of Disruptive Artificial Intelligence (AI) in InfoSec

Fig. 1. Swenson, Jeremy, Stock; AI and InfoSec Trade-offs. 2024.

Disruptive technology refers to innovations or advancements that significantly alter the existing market landscape by displacing established technologies, products, or services, often leading to the transformation of entire industries. These innovations introduce novel approaches, functionalities, or business models that challenge traditional practices, creating a substantial impact on how businesses operate (ChatGPT, 2024). Disruptive technologies typically emerge rapidly, offering unique solutions that are more efficient, cost-effective, or user-friendly than their predecessors.

The disruptive nature of these technologies often leads to a shift in market dynamics, as with digital cameras or smartphones, in which new entrants or previously marginalized players gain prominence while established entities may struggle to adapt to the transformative changes (ChatGPT, 2024). Examples of disruptive technologies include the advent of the internet, mobile technology, and artificial intelligence (AI), each reshaping industries and societal norms. Here are four of the leading AI tools:

1. OpenAI’s GPT:

OpenAI’s GPT (Generative Pre-trained Transformer) models, including GPT-3 and GPT-2, are predecessors to ChatGPT. These models are known for their large-scale language understanding and generation capabilities. GPT-3, in particular, was among the most advanced language models at its release, featuring 175 billion parameters.

2. Microsoft’s DialoGPT:

DialoGPT is a conversational AI model developed by Microsoft. It is an extension of the GPT architecture but fine-tuned specifically for engaging in multi-turn conversations. DialoGPT exhibits improved dialogue coherence and contextual understanding, making it a competitor in the chatbot space.

3. Facebook’s BlenderBot:

BlenderBot is a conversational AI model developed by Facebook. It aims to address the challenges of maintaining coherent and contextually relevant conversations. BlenderBot is trained using a diverse range of conversations and exhibits improved performance in generating human-like responses in chat-based interactions.

4. Rasa:

Rasa is an open-source conversational AI platform that focuses on building chatbots and voice assistants. Unlike some other models that are pre-trained on large datasets, Rasa allows developers to train models specific to their use cases and customize the behavior of the chatbot. It is known for its flexibility and control over the conversation flow.

Here is a list of the pros and cons of AI-based InfoSec capabilities.

Pros of AI in InfoSec:

1. Improved Threat Detection:

AI enables quicker and more accurate detection of cybersecurity threats by analyzing vast amounts of data in real-time and identifying patterns indicative of malicious activities. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few of the current examples.
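The pattern-based detection that a SOAR playbook might automate can be sketched in miniature. This is an illustrative example only (the host addresses, event shape, and threshold are hypothetical, not drawn from any of the products named above): it counts failed logins per host and flags the outliers that would trigger an automated response.

```python
# Hypothetical sketch: flag hosts whose failed-login count exceeds a
# threshold, the kind of rule a SOAR playbook might run automatically.
from collections import Counter

def flag_anomalies(events, threshold=5):
    """events: iterable of (host, outcome) tuples.
    Returns the sorted list of hosts with more than `threshold` failures."""
    failures = Counter(host for host, outcome in events if outcome == "fail")
    return sorted(host for host, count in failures.items() if count > threshold)

events = [("10.0.0.5", "fail")] * 8 + [("10.0.0.7", "ok"), ("10.0.0.7", "fail")]
print(flag_anomalies(events))  # ['10.0.0.5']
```

A real platform would enrich each alert with context (asset owner, geolocation, prior incidents) and hand it to a response playbook rather than simply printing it.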

2. Behavioral Analysis:

AI can perform behavioral analysis to identify anomalies in user behavior or network activities, helping detect insider threats or sophisticated attacks that may go unnoticed by traditional security measures. Behavioral biometrics, such as analyzing typing patterns and mouse movements, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege.
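The core idea behind behavioral biometrics can be illustrated with a deliberately simple statistical check. This is a toy sketch, not a production biometric: the baseline and session timings are made-up values, and real systems model many features with learned models rather than a single z-score.

```python
# Illustrative sketch: compare a session's keystroke timing against a
# user's historical baseline using a z-score; a large deviation suggests
# someone other than the legitimate user may be at the keyboard.
from statistics import mean, stdev

def keystroke_zscore(baseline_ms, session_ms):
    """How many standard deviations the session's mean inter-key delay
    sits from the user's baseline mean."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    return abs(mean(session_ms) - mu) / sigma

baseline = [110, 120, 115, 125, 118, 122]  # user's typical inter-key delays (ms)
session = [60, 55, 65, 58]                 # much faster typist this session
print(keystroke_zscore(baseline, session) > 3)  # True -> flag for review
```

In practice such a score would be one signal among many, feeding a risk engine that decides whether to step up authentication rather than block outright.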

3. Enhanced Phishing Detection:

AI algorithms can analyze email patterns and content to identify and block phishing attempts more effectively, reducing the likelihood of successful social engineering attacks.
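A heavily simplified version of content-based scoring looks like the sketch below. The indicator phrases and weights are invented for illustration; actual AI filters learn weights from large labeled mail corpora and also examine headers, links, and sender reputation.

```python
# Toy phishing heuristic (assumed phrases and weights, not a real model):
# sum the weights of suspicious phrases found in the message body.
SUSPICIOUS = {
    "urgent": 2,
    "verify your account": 3,
    "password": 2,
    "wire transfer": 3,
}

def phishing_score(body):
    """Return a suspicion score for a message body."""
    text = body.lower()
    return sum(weight for phrase, weight in SUSPICIOUS.items() if phrase in text)

mail = "URGENT: verify your account password within 24 hours"
print(phishing_score(mail))  # 7 -> above a quarantine threshold of, say, 5
```

A learned classifier replaces the hand-picked dictionary with weights fitted to data, which is what lets it catch novel phrasings that a static list would miss.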

4. Automation of Routine Tasks:

AI can automate repetitive and routine tasks, allowing cybersecurity professionals to focus on more complex issues. This helps enhance efficiency and reduces the risk of human error.

5. Adaptive Defense Systems:

AI-powered security systems can adapt to evolving threats by continuously learning and updating their defense mechanisms. This adaptability is crucial in the dynamic landscape of cybersecurity.

6. Quick Response to Incidents:

AI facilitates rapid response to security incidents by providing real-time analysis and alerts. This speed is essential in preventing or mitigating the impact of cyberattacks.

Cons of AI in InfoSec:

1. Sophistication of Attacks:

As AI is integrated into cybersecurity defenses, attackers may also leverage AI to create more sophisticated and adaptive threats, leading to a continuous escalation in the complexity of cyberattacks.

2. Ethical Concerns:

The use of AI in cybersecurity raises ethical considerations, such as privacy issues, potential misuse of AI for surveillance, and the need for transparency in how AI systems operate.

3. Cost and Resource Intensive:

Implementing and maintaining AI-powered security systems can be resource-intensive, both in terms of financial investment and skilled personnel required for development, implementation, and ongoing management.

4. False Positives and Negatives:

AI systems are not infallible and may produce false positives (incorrectly flagging normal behavior as malicious) or false negatives (failing to detect actual threats). This poses challenges in maintaining a balance between security and user convenience.
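The trade-off between false positives and false negatives is usually quantified with precision and recall, which can be computed from a labeled sample of alerts. The predictions and ground-truth labels below are made-up values for illustration.

```python
# Sketch: measure a detector's alert quality against analyst-confirmed
# ground truth. Precision falls as false positives rise; recall falls as
# false negatives (missed threats) rise.
def precision_recall(predictions, labels):
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
    return tp / (tp + fp), tp / (tp + fn)

preds = [1, 1, 0, 1, 0, 0, 1, 0]  # detector says "malicious"
truth = [1, 0, 0, 1, 1, 0, 1, 0]  # analyst-confirmed ground truth
precision, recall = precision_recall(preds, truth)
print(precision, recall)  # 0.75 0.75
```

Tuning a detector's threshold trades one metric against the other, which is exactly the security-versus-convenience balance described above.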

5. Lack of Human Understanding:

AI lacks contextual understanding and human intuition, which may result in misinterpretation of certain situations or the inability to recognize subtle indicators of a potential threat. This is where QA and governance come in, providing oversight in case something goes wrong.

6. Dependency on Training Data:

AI models rely on training data, and if the data used is biased or incomplete, it can lead to biased or inaccurate outcomes. Ensuring diverse and representative training data is crucial to the effectiveness of AI in InfoSec.
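One concrete, easily automated check for this problem is measuring class balance before training: a security dataset dominated by benign samples can bias a model toward missing attacks. The label counts below are hypothetical.

```python
# Sketch: check a training set's class balance. A heavily skewed ratio
# (e.g. 95% benign) signals a dataset that may bias the model toward
# under-detecting the rare malicious class.
from collections import Counter

def class_balance(labels):
    """Return each label's share of the dataset as a fraction."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

labels = ["benign"] * 950 + ["malicious"] * 50
ratios = class_balance(labels)
print(ratios["malicious"])  # 0.05 -> heavily imbalanced; consider resampling
```

Common mitigations include oversampling the minority class, collecting more representative data, or weighting the loss function, each of which addresses the imbalance this check surfaces.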

About the author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.