🛡️ Cyberattack on St. Paul Disrupts Systems, Triggers National Guard Response: A Wake-Up Call for City Infrastructure and Public-Private Security

Fig. 1. St. Paul Cyber Attack, St. Paul, 2025.

A major cyberattack brought critical systems across the City of St. Paul to a halt this week, prompting Governor Tim Walz to take the rare step of activating the Minnesota National Guard’s 177th Cyber Protection Team through Executive Order 24-25. The breach, which has yet to be fully disclosed in technical detail, forced the shutdown of municipal networks, libraries, payment systems, and internal applications—raising alarms about the fragility of local government infrastructure in the digital age.

This crisis has not only impacted operations but also exposed deeper vulnerabilities—from disruption of city services to potential legal and evidentiary breakdowns, especially concerning the chain of custody for digital evidence and sensitive case management platforms used by law enforcement and legal teams.

“The cyberattack… has resulted in a disruption of city services and operations, and the city has requested assistance from the State of Minnesota in the form of technical expertise and personnel,” Gov. Walz stated in the executive order. “The incident poses a threat to the delivery of critical government services.” (Walz, 2025)


Legal and Infrastructure Ramifications:

One frequently overlooked consequence of cyberattacks on public systems is the risk to legal integrity. City governments often store digital evidence for court cases, police body-cam footage, and case records within networked systems. When such systems are compromised or taken offline, the chain of custody—a legal requirement for maintaining the integrity of evidence—may be broken. This could lead to dismissed charges, delayed court proceedings, or contested verdicts.
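To make the risk concrete, here is a minimal sketch of one common safeguard: hash-stamping each evidence file and appending every transfer to an append-only log. It is illustrative only; the file names and handler roles are assumptions, not details from the St. Paul incident.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    """Compute a SHA-256 digest of an evidence file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_custody_event(path: str, handler: str, action: str,
                         log_path: str = "custody_log.jsonl") -> dict:
    """Append a timestamped, hash-stamped entry to an append-only custody log."""
    entry = {
        "file": path,
        "sha256": sha256_of_file(path),
        "handler": handler,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example (hypothetical file): log ingest, then later verify nothing changed.
# original = record_custody_event("bodycam_0142.mp4", "Officer A", "ingest")
# assert sha256_of_file("bodycam_0142.mp4") == original["sha256"]
```

If a recorded hash no longer matches the file, the custody chain is provably broken at that step, which is exactly the question courts will ask after an incident like this one.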

Beyond the courts, St. Paul's systems underpin essential infrastructure. From 911 backend operations to building permits, utility management, and emergency communications, these disruptions ripple into residents' lives and civic trust. Any delay in fire dispatch systems, real-time weather alerts, or even payroll processing for emergency responders can escalate into a broader crisis.


Why Public-Private Partnerships Are Essential:

The attack illustrates the need for stronger collaboration between public entities and private cybersecurity firms. Municipalities often operate with limited budgets, aging infrastructure, and insufficient security staff. In contrast, private-sector vendors—ranging from cloud security providers to endpoint monitoring specialists—offer scalable defenses and expertise that cities can’t always sustain in-house.

Governor Walz’s executive order underscores this reality, stating:

“Cooperation between the Minnesota Department of Information Technology Services (MNIT), the National Guard, and other partners is necessary to protect public assets and respond to cybersecurity threats.” (Walz, 2025)

This partnership must also extend beyond technical vendors. Insurance carriers, legal risk consultants, and incident response firms should be part of proactive city planning, not just post-breach triage.


The Human Factor: Employee Training Matters:

While technical systems are critical, human error remains the top vector for cyberattacks, especially through phishing and social engineering. A well-crafted phishing email clicked by a single city employee can introduce malware into core systems.

St. Paul’s situation shows how cybersecurity education is no longer optional. Ongoing staff training—including:

  • Simulated phishing attacks
  • Clear escalation protocols
  • “Stop and verify” culture for email attachments and access requests

…is essential. Cities should treat their staff as the first line of defense, not just passive users.


The Road Ahead: What Cities Must Do Now:

The cyberattack on St. Paul should serve as a regional and national inflection point. Other cities must take this as a cue to reassess their cyber posture through the following:

Strategic Priorities:

  1. Zero Trust Implementation: Limit internal access and require continuous authentication, even for trusted users.
  2. Third-Party Risk Audits: Review vendors, contractors, and outsourced services for security gaps.
  3. Resilient Backup and Recovery: Ensure data is stored offsite and tested regularly for recovery readiness (see the restore-check sketch after this list).
  4. Legal and Digital Forensics Planning: Build frameworks for protecting the chain of custody in case of breach.
  5. Integrated Public-Private Playbooks: Define shared roles between city staff, Guard units, and private partners in cyber response drills.
  6. Community Transparency: Proactively inform the public about risks, responses, and what's being done to rebuild digital trust.
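On point 3, a restore drill is only meaningful if the restored data is verified byte for byte. Below is a minimal checksum-based restore check; the directory paths are placeholders, and a real program would also verify database dumps and application state.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks to handle large records."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list[str]:
    """Return files whose restored copy is missing or does not match the source."""
    failures = []
    for src in Path(source_dir).rglob("*"):
        if not src.is_file():
            continue
        restored = Path(restored_dir) / src.relative_to(source_dir)
        if not restored.exists() or checksum(src) != checksum(restored):
            failures.append(str(src))
    return failures

# Run after every scheduled restore drill; an empty list means the drill passed.
# print(verify_restore("/data/city_records", "/mnt/restore_test/city_records"))
```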

Final Thoughts:

The breach in St. Paul is not just a local IT issue—it is a civic security event that affects courts, emergency services, legal integrity, and public confidence. Governor Walz’s activation of the National Guard is a bold signal that digital defense is now a matter of public safety.

“Immediate action is necessary to provide technical support and ensure continuity of operations,” reads Executive Order 24-25 (Walz, 2025).

Moving forward, public-private partnerships, cybersecurity training, and legal readiness must become foundational to how cities govern in the digital era. The stakes are no longer theoretical—they are real, operational, and deeply human.


References:

  1. FOX 9. (2025, July 29). Gov. Walz activates National Guard after cyberattack on city of St. Paul. https://www.fox9.com/news/gov-walz-activates-national-guard-after-cyberattack-st-paul
  2. KSTP. (2025, July 29). City of St. Paul experiencing unplanned technology disruptions. https://kstp.com/kstp-news/top-news/city-of-st-paul-experiencing-unplanned-technology-disruptions/
  3. League of Minnesota Cities. (2024, October). Cybersecurity Incident Reporting Requirements for Cities. https://www.lmc.org/news-publications/news/all/fonl-cybersecurity-incident-reporting-requirements/
  4. Reddit. (2025, July 29). Minnesota National Guard activated after city cyberattack [Discussion threads]. https://www.reddit.com/r/minnesota
  5. Walz, T. (2025, July 29). Executive Order 24-25: Activating the Minnesota National Guard Cyber Protection Team. Office of the Governor, State of Minnesota. https://mn.gov/governor/assets/EO-24-25_tcm1055-621842.pdf

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years, he has held progressive roles at many banks, insurance companies, retailers, healthcare organizations, and even government entities. Organizations appreciate his talent for bridging gaps, uncovering hidden risk management solutions, and simultaneously enhancing processes. He is a frequent speaker, podcaster, and a published writer in CISA Magazine and the ISSA Journal, among others. He holds a certificate in Media Technology from Oxford University's Media Policy Summer Institute, an MBA from Saint Mary's University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Cyber Security Summit Think Tank, the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy. He also has certifications from Intel and the Department of Homeland Security.

DeepSeek R1: A New Chapter in Global AI Realignment

Fig. 1. DeepSeek and Global AI Change Infographic, Jeremy Swenson, 2025.

Minneapolis—

DeepSeek, the Chinese artificial intelligence company founded by Liang Wenfeng and backed by High-Flyer, has continued to redefine the AI landscape since the explosive launch of its R1 model in late January 2025. Emerging from a background in quantitative trading and rapidly evolving into a pioneer in open-source LLMs, DeepSeek now stands as a formidable competitor to established systems like OpenAI’s ChatGPT and Microsoft’s proprietary models available on Azure AI. This article provides an expanded analysis of DeepSeek R1’s technical innovations, detailed comparisons with ChatGPT and Microsoft Azure AI offerings, and the broader economic, cybersecurity, and geopolitical implications of its emergence.


Technical Innovations and Architectural Advances:

Novel Training Methodologies: DeepSeek R1 leverages a cutting-edge combination of pure reinforcement learning and chain-of-thought prompting to achieve human-like reasoning in tasks such as advanced mathematics and code generation. Unlike traditional LLMs that rely heavily on supervised fine-tuning, DeepSeek's R1 is engineered to autonomously refine its reasoning steps, resulting in greater clarity and efficiency. In early benchmarking tests, R1 demonstrated the ability to solve multi-step arithmetic problems in approximately three minutes—substantially faster than ChatGPT's o1 model, which typically required five minutes (Sayegh, 2025).

Cloud Integration and Open-Source Deployment: One of R1's key strengths lies in its open-source availability under an MIT license, a stark contrast to the closed ecosystems of its Western counterparts. Major cloud platforms have rapidly integrated R1: Amazon has deployed it via the Bedrock Marketplace and SageMaker, and Microsoft has incorporated it into its Azure AI Foundry and GitHub model catalog. This wide accessibility not only allows for extensive external scrutiny and customization but also enables enterprises to deploy the model locally, ensuring that sensitive data remains under domestic control (Yun, 2025; Sharma, 2025).
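To illustrate the local-deployment point, here is a hedged sketch of querying an R1 model served on-premises through an Ollama-style local API; the host, port, and model tag are assumptions that depend on your runtime and installed model.

```python
import json
import urllib.request

def ask_local_r1(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a prompt to a locally hosted DeepSeek R1 model via an Ollama-style API."""
    payload = json.dumps({
        "model": "deepseek-r1",   # model tag is an assumption; match your local install
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Because the request never leaves the machine, sensitive data stays under local control.
# print(ask_local_r1("Summarize the chain-of-custody requirements for digital evidence."))
```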


Detailed Comparison with ChatGPT:

Performance and Reasoning Clarity: ChatGPT's o1 model has been widely recognized for its robust reasoning capabilities; however, its closed-source nature limits transparency. In direct comparisons, DeepSeek R1 has shown parity—and in some cases superiority—with respect to reasoning clarity. Independent tests by developers indicate that R1's intermediate reasoning steps are more comprehensible, facilitating easier debugging and iterative query refinement. For example, in complex multi-step problem-solving scenarios, R1 not only delivered correct solutions more rapidly but also provided detailed, human-like explanations of its thought process (Sayegh, 2025).

Cost Efficiency and Accessibility: While premium access to ChatGPT's capabilities can cost users upwards of $200 per month, DeepSeek R1 offers its advanced functionalities free of charge. This dramatic reduction in cost is achieved through efficient use of computational resources. DeepSeek reportedly trained R1 using only 2,048 Nvidia H800 GPUs at an estimated cost of $5.6 million—an expenditure that is a fraction of the resources typically required by U.S. competitors (Waters, 2025). Such cost efficiency democratizes access to high-performance AI, providing significant advantages for startups, academic institutions, and small businesses.


Detailed Comparison with Microsoft Azure AI:

Integration with Enterprise Platforms: Microsoft has long been a leader in providing enterprise-grade AI solutions via Azure AI. Recently, Microsoft integrated DeepSeek R1 into its Azure AI Foundry, offering customers an additional open-source option that complements its proprietary models. This integration allows organizations to leverage R1's powerful reasoning capabilities while enjoying the benefits of Azure's robust security, compliance, and scalability. Unlike some closed-source models that require extensive licensing fees, R1's open-access nature under Azure enables organizations to tailor the model to their specific needs, maintaining data sovereignty and reducing operational costs (Sharma, 2025).

Performance in Real-World Applications: In practical applications, users on Azure have reported that DeepSeek R1 not only matches but sometimes exceeds the performance of traditional models in complex reasoning and mathematical problem-solving tasks. By deploying R1 locally via Azure, enterprises can ensure that sensitive computations are performed in-house, thereby addressing critical data privacy concerns. This localized approach is particularly valuable in regulated industries, where strict data governance is paramount (FT, 2025).
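As a rough illustration of the Azure path, the sketch below uses the azure-ai-inference Python package to call a DeepSeek R1 deployment. The endpoint, key, and deployment name are placeholders, and the exact setup in your Azure AI Foundry project may differ.

```python
# pip install azure-ai-inference
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Endpoint and key are placeholders; use the values from your own deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    model="DeepSeek-R1",  # deployment name as it appears in your model catalog
    messages=[
        SystemMessage(content="You are a careful reasoning assistant."),
        UserMessage(content="Walk through the steps to validate a supply-chain risk model."),
    ],
)
print(response.choices[0].message.content)
```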


Market Reactions and Economic Implications:

Immediate Market Response and Stock Volatility: The initial launch of DeepSeek R1 triggered a significant market reaction, most notably an 18% plunge in Nvidia's stock as investors reassessed the cost structures underlying AI development. The disruption led to a combined market value wipeout of nearly $1 trillion across tech stocks, reflecting widespread concern over the implications of achieving top-tier AI performance with significantly lower computational expenditure (Waters, 2025).

Long-Term Investment Perspectives: Despite the short-term volatility, many analysts view the current market corrections as a temporary disruption and a potential buying opportunity. The cost-efficient and open-source nature of R1 is expected to drive broader adoption of advanced AI technologies across various industries, ultimately spurring innovation and generating new revenue streams. Major U.S. technology firms, in response, are accelerating initiatives like the Stargate Project to bolster domestic AI infrastructure and maintain global competitiveness (FT, 2025).


Cybersecurity, Data Privacy, and Regulatory Reactions:

Governmental Bans and Regulatory Scrutiny: DeepSeek's practice of storing user data on servers in China and its adherence to local censorship policies have raised significant cybersecurity and privacy concerns. In response, U.S. lawmakers have proposed bipartisan legislation to ban DeepSeek's software on government devices. Similar regulatory actions have been taken in Australia, South Korea, and Canada, reflecting a global trend of caution toward technologies with potential national security risks (Scroxton, 2025).

Security Vulnerabilities and Red-Teaming Results: Independent cybersecurity tests have revealed that R1 is more prone to generating insecure code and harmful outputs compared to some Western models. These findings have prompted calls for more rigorous red-teaming and continuous monitoring to ensure that the model can be safely deployed at scale. The vulnerabilities underscore the necessity for both DeepSeek and its adopters to implement robust safety protocols to mitigate potential misuse (Agarwal, 2025).
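A tiny illustration of what red-teaming for insecure code generation can look like: scanning model output against a deny-list of dangerous constructs. Real evaluations are far broader; the patterns below are illustrative assumptions, not a production scanner.

```python
import re

# Illustrative deny-list only; real red-teaming uses much larger test suites.
RISKY_PATTERNS = {
    r"\beval\(": "dynamic evaluation of untrusted input",
    r"\bos\.system\(": "shell command execution",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"password\s*=\s*[\"'][^\"']+[\"']": "hard-coded credential",
}

def scan_generated_code(code: str) -> list[str]:
    """Flag obviously dangerous constructs in model-generated code."""
    findings = []
    for pattern, description in RISKY_PATTERNS.items():
        if re.search(pattern, code):
            findings.append(description)
    return findings

sample = 'requests.get(url, verify=False)\npassword = "hunter2"'
print(scan_generated_code(sample))
# ['TLS certificate verification disabled', 'hard-coded credential']
```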


Geopolitical and Strategic Implications:

Challenging U.S. AI Dominance: DeepSeek R1's emergence is a clear signal that high-performance AI can be developed without the massive resource investments traditionally associated with U.S. models. This development challenges the long-standing assumption of American technological supremacy and has prompted a strategic reevaluation among U.S. policymakers and industry leaders. In response, initiatives such as the Stargate Project are being accelerated to ensure that the U.S. maintains its competitive edge in the global AI arena (Karaian & Rennison, 2025).

Localized AI Ecosystems and Data Sovereignty: To mitigate cybersecurity risks, several U.S. companies are now repackaging R1 for localized deployment. By ensuring that sensitive data remains on domestic servers, these firms are not only addressing privacy concerns but also paving the way for the creation of robust, localized AI ecosystems. This trend could ultimately reshape global data governance practices and alter the balance of technological power between the U.S. and China (von Werra, 2025).


Conclusion and Future Outlook:

DeepSeek R1 represents a watershed moment in the global AI race. Its technical innovations, cost efficiency, and open-source approach challenge entrenched assumptions about the necessity of massive compute power and proprietary control. In direct comparisons with systems like ChatGPT’s o1 and Microsoft’s Azure AI offerings, R1 demonstrates superior transparency and operational speed, while also offering unprecedented accessibility. Despite ongoing cybersecurity and regulatory challenges, the disruptive impact of R1 is catalyzing a broader realignment in AI development strategies. As both U.S. and Chinese technology ecosystems adapt to these shifts, the future of AI appears poised for a more democratized, competitively diverse, and strategically complex evolution.


About The Author:

Jeremy A. Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and seasoned senior management tech risk and digital strategy consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota’s Technological Leadership Institute, an MBA from Saint Mary’s University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale, and New Hope Community Police Academy (MN), and the Minneapolis FBI Citizens Academy. You can follow him on LinkedIn and Twitter.


References:

  1. Yun, C. (2025, January 30). DeepSeek-R1 models now available on AWS. Amazon Web Services Blog. Retrieved February 8, 2025, from https://aws.amazon.com/blogs/aws/deepseek-r1-models-now-available-on-aws/
  2. Sharma, A. (2025, January 29). DeepSeek R1 is now available on Azure AI Foundry and GitHub. Microsoft Azure Blog. Retrieved February 8, 2025, from https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/
  3. Waters, J. K. (2025, January 28). Nvidia plunges 18% and tech stocks slide as China’s DeepSeek spooks investors. Business Insider Markets. Retrieved February 8, 2025, from https://markets.businessinsider.com/news/stocks/nvidia-tech-stocks-deepseek-ai-race-nasdaq-2025-1
  4. Scroxton, A. (2025, February 7). US lawmakers move to ban DeepSeek AI tool. ComputerWeekly. Retrieved February 8, 2025, from https://www.computerweekly.com/news/366619153/US-lawmakers-move-to-ban-DeepSeek-AI-tool
  5. FT. (2025, January 28). The global AI race: Is China catching up to the US? Financial Times. Retrieved February 8, 2025, from https://www.ft.com/content/0e8d6f24-6d45-4de0-b209-8f2130341bae
  6. Agarwal, S. (2025, January 31). DeepSeek-R1 AI Model 11x more likely to generate harmful content, security research finds. Globe Newswire. Retrieved February 8, 2025, from https://www.globenewswire.com/news-release/2025/01/31/3018811/0/en/DeepSeek-R1-AI-Model-11x-More-Likely-to-Generate-Harmful-Content-Security-Research-Finds.html
  7. Karaian, J., & Rennison, J. (2025, January 28). The day DeepSeek turned tech and Wall Street upside down. The Wall Street Journal. Retrieved February 8, 2025, from https://www.wsj.com/finance/stocks/the-day-deepseek-turned-tech-and-wall-street-upside-down-f2a70b69
  8. von Werra, L. (2025, January 31). The race to reproduce DeepSeek’s market-breaking AI has begun. Business Insider. Retrieved February 8, 2025, from https://www.businessinsider.com/deepseek-r1-open-source-replicate-ai-west-china-hugging-face-2025-1
  9. Sayegh, E. (2025, January 27). DeepSeek is bad for Silicon Valley. But it might be great for you. Vox. Retrieved February 8, 2025, from https://www.vox.com/technology/397330/deepseek-openai-chatgpt-gemini-nvidia-china

8 Key AI Trends Driving Business Innovation in 2024 and Beyond

Minneapolis—

Artificial Intelligence (AI) continues to drive massive innovation across industries, reshaping business operations, customer interactions, and cybersecurity landscapes. As AI's capabilities grow, companies are leveraging key trends to stay competitive and secure. Below are eight crucial AI trends transforming businesses today, alongside critical insights on securing AI infrastructure, promoting responsible AI use, and enhancing workforce efficiency in a digital world.

1. Generative AI’s Creative Expansion

Generative AI, known for producing content from text and images to music and 3D models, is expanding its reach into business innovation.[1] AI systems like GPT-4 and DALL·E are being applied across industries to automate creativity, allowing businesses to scale their marketing efforts, design processes, and product innovation.

Business Application: Marketing teams are using generative AI to create personalized, dynamic campaigns across digital platforms. Coca-Cola and Nike, for instance, have employed AI to tailor advertising content to different customer segments, improving engagement and conversion rates. Product designers in industries like fashion and automotive are also using generative models to prototype new designs faster than ever before.

2. AI-Powered Personalization

AI’s ability to analyze vast datasets in real time is driving hyper-personalized experiences for consumers. This trend is especially important in sectors like e-commerce and entertainment, where personalized recommendations significantly impact user engagement and loyalty.

Business Application: Streaming platforms like Netflix and Spotify rely on AI algorithms to provide tailored content recommendations based on users’ preferences, viewing habits, and search history.[2] Retailers like Amazon are also leveraging AI to offer personalized shopping experiences, recommending products based on past purchases and browsing behavior, further boosting customer satisfaction.
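Under the hood, many of these recommenders start from something as simple as user-to-user similarity over a ratings matrix. The toy sketch below, with made-up ratings, shows the basic cosine-similarity mechanic; production systems add deep models, context, and real-time signals.

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, columns: items); 0 = not yet rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def recommend(user: int, k: int = 2) -> list[int]:
    """Rank unrated items by the tastes of the most similar users."""
    norms = np.linalg.norm(ratings, axis=1, keepdims=True)
    sims = (ratings @ ratings.T) / (norms @ norms.T)  # cosine similarity between users
    sims[user] = -1                                   # exclude the user themselves
    neighbors = np.argsort(sims[:, user])[::-1][:k]   # top-k most similar users
    scores = ratings[neighbors].mean(axis=0)
    scores[ratings[user] > 0] = -1                    # ignore items already rated
    return list(np.argsort(scores)[::-1])

print(recommend(user=0))  # items ranked for user 0; the unrated item surfaces first
```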

3. AI-Driven Automation in Operations

Automation powered by AI is optimizing operations and processes across industries, from manufacturing to customer service. By automating repetitive and manual tasks, businesses are reducing costs, improving efficiency, and reallocating resources to higher-value activities.

Business Application: Tesla and Siemens are implementing AI in robotic process automation (RPA) to streamline production lines and monitor equipment for potential breakdowns. In customer service, AI chatbots and virtual assistants are being used to handle routine inquiries, providing real-time support to customers while freeing human agents to address more complex issues.

4. Securing AI Infrastructure and Development Practices

As AI adoption grows, so does the need for robust security measures to protect AI infrastructure and development processes. AI systems are vulnerable to cyberattacks, data breaches, and unauthorized access, highlighting the importance of securing AI from development to deployment.

Business Application: Organizations are recognizing the importance of securing AI models, data, and networks through multi-layered security frameworks. The U.S. AI Safety Institute Consortium is actively developing guidelines for AI safety and security, including red-teaming and risk management practices, to ensure AI systems are resilient to attacks. DevSecOps practices need to sit at the front end of this work, not be bolted on afterward. To address challenges in securing AI, companies are pushing for standardization in AI audits and evaluations, ensuring consistency in security practices across industries.

5. AI in Predictive Analytics and Decision-Making

Predictive analytics, powered by AI, is enabling companies to forecast trends, predict consumer behavior, and make data-driven decisions with greater accuracy. This is particularly valuable in finance, healthcare, and retail, where anticipating demand or market shifts can lead to significant competitive advantages.

Business Application: Financial institutions like JPMorgan Chase are using AI for predictive analytics to evaluate market conditions, identify investment opportunities, and manage risk.[3] Retailers such as Walmart are employing AI to forecast inventory needs, helping to optimize supply chains and reduce waste. Predictive analytics also allows companies to make proactive decisions regarding customer retention and product development.
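At its simplest, this kind of forecasting is a trend fit over historical demand. The sketch below uses invented sales figures and a least-squares line; production systems layer on seasonality, promotions, and external signals.

```python
import numpy as np

# Twelve weeks of unit sales for one store item (toy data).
weeks = np.arange(12)
sales = np.array([120, 125, 130, 128, 140, 145, 150, 148, 160, 163, 170, 175])

# Fit a least-squares trend line: sales ~ slope * week + intercept.
slope, intercept = np.polyfit(weeks, sales, deg=1)

# Forecast the next four weeks to drive inventory orders.
future = np.arange(12, 16)
forecast = slope * future + intercept
print({int(w): round(float(f)) for w, f in zip(future, forecast)})
```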

6. AI for Enhanced Cybersecurity

AI plays an increasingly pivotal role in improving cybersecurity defenses. AI-driven systems are capable of detecting anomalies, identifying potential threats, and responding to attacks in real time, offering advanced protection for both physical and digital assets.

Business Application: Leading organizations are integrating AI into cybersecurity protocols to automate threat detection and enhance system defenses. IBM’s AI-powered QRadar platform helps companies identify and respond to cyberattacks by analyzing network traffic and detecting unusual activity.[4] AI systems are also improving identity authentication through biometrics, ensuring that only authorized users gain access to sensitive data.
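A common building block for this kind of detection is an unsupervised outlier model trained on normal traffic. The sketch below uses scikit-learn's IsolationForest on synthetic flow features; the features and thresholds are illustrative, not tuned for any real network.

```python
# pip install scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy network-flow features: [bytes transferred, connection duration in seconds].
normal_traffic = rng.normal(loc=[500, 30], scale=[100, 10], size=(500, 2))
suspicious = np.array([[50_000, 2], [45_000, 1]])  # large, short bursts (exfiltration-like)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(model.predict(suspicious))  # -1 marks an anomaly, 1 marks normal traffic
```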

Moreover, businesses are adopting AI governance frameworks to secure their AI infrastructure and ensure ethical deployment. Evaluating risks associated with open- and closed-source AI development allows for transparency and the implementation of tailored security strategies across sectors.

7. Promoting Responsible AI Use and Security Governance

Beyond technical innovation, AI governance and responsible use are paramount to ensure that AI is developed and applied ethically. Promoting responsible AI use means adhering to best practices and security standards to prevent misuse and unintended harm. The NIST AI risk management framework is a good reference for this.[5]

Business Application: Companies are actively developing frameworks that incorporate ethical principles throughout the lifecycle of AI systems. Microsoft and Google are leading initiatives to mitigate bias and ensure transparency in AI algorithms. Governments and private sectors are also collaborating to develop standardized guidelines and security metrics, helping organizations maintain ethical compliance and robust cybersecurity.

8. Enhancing Workforce Efficiency and Skills Development

AI’s role in enhancing workforce efficiency is not limited to automating tasks. AI-driven training and simulations are transforming how organizations develop and retain talent, particularly in cybersecurity, where skilled professionals are in high demand.

Business Application: Companies are investing in AI-driven educational platforms that simulate real-world cybersecurity scenarios, helping employees hone their skills in a dynamic, hands-on environment. These AI-powered platforms allow for personalized learning, adapting to individual skill levels and providing targeted feedback. Additionally, AI is being used to identify skill gaps within teams and recommend tailored training programs, improving workforce readiness for future challenges. Yet AI-capable people are still needed to support these applications and the managerial efforts behind them.

Conclusion: AI’s Role in Business and Security Transformation

As AI tools advance rapidly, it’s wise to assume they can access and analyze all publicly available content, including social media posts and articles like this one. While AI can offer valuable insights, organizations must remain vigilant about how these tools interact with one another, ensuring that application-to-application permissions are thoroughly scrutinized. Public-private partnerships, such as InfraGard, need to be strengthened to address these evolving challenges. Not everyone needs to be a journalist, but having the common sense to detect AI- or malware-generated fake news is crucial. It’s equally important to report any AI bias within big tech from perspectives including IT, compliance, media, and security.

Amid the AI hype, organizations should resist the urge to adopt every new tool that comes along. Instead, they should evaluate each AI system or use case based on measurable, real-world outcomes. AI’s rapid evolution is transforming both business operations and cybersecurity practices. Companies that effectively leverage trends like generative AI, predictive analytics, and automation, while prioritizing security and responsible use, will be better positioned to lead in the digital era. Securing AI infrastructure, promoting ethical AI development, and investing in workforce skills are crucial for long-term success.

Cloud infrastructure is another area that will continue to expand quickly, adding complexity to both perimeter security and compliance. Organizations should invest in AI-based cloud solutions and prioritize hiring cloud-trained staff. Diversifying across multiple cloud providers can mitigate risk, promote vendor competition, and ensure employees gain cross-platform expertise.

To navigate this complex landscape, businesses should adopt ethical, innovative, and secure AI strategies. Forming an AI governance committee is essential to managing the unique risks posed by AI, ensuring they aren’t overlooked or mistakenly merged with traditional IT risks. The road ahead holds tremendous potential, and those who proceed with careful consideration and adaptability will lead the way in AI-driven transformation.

About the Author:

Jeremy A. Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and seasoned senior management tech risk and digital strategy consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota’s Technological Leadership Institute, an MBA from Saint Mary’s University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale, and New Hope Community Police Academy (MN), and the Minneapolis FBI Citizens Academy. You can follow him on LinkedIn and Twitter.

References:


[1] PYMNTS. “AI Sparks a Creative Revolution in Business, With an Unexpected Twist.” 07/19/24. https://www.pymnts.com/artificial-intelligence-2/2024/ai-sparks-a-creative-revolution-in-business-with-an-unexpected-twist/

[2] Josifovski, Vanja. “The Future Of AI-Powered Personalization: The Potential Of Choices.” Forbes. https://www.forbes.com/councils/forbestechcouncil/2023/07/03/the-future-of-ai-powered-personalization-the-potential-of-choices/

[3] Son, Hugh. “JPMorgan Chase is giving its employees an AI assistant powered by ChatGPT maker OpenAI.” 08/09/24. https://www.cnbc.com/2024/08/09/jpmorgan-chase-ai-artificial-intelligence-assistant-chatgpt-openai.html

[4] Culafi, Alexander. “IBM launches AI-powered security offering QRadar Suite.” Tech Target. 04/23/23. https://www.techtarget.com/searchsecurity/news/365535549/IBM-launches-AI-powered-security-offering-QRadar-Suite

[5] NIST. “AI Risk Management Framework.” 07/26/24. https://www.nist.gov/itl/ai-risk-management-framework

Four Key Emerging Considerations with Artificial Intelligence (AI) in Cyber Security

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #infosec #musktwitter #disinformation #cio #ciso #cto #chatgpt #openai #airisk #iam #rbac #artificialintelligence #samaltman #aiethics #nistai #futurereadybusiness #futureofai

By Jeremy Swenson

Fig. 1. Zero Trust Components to Orchestration AI Mashup; Microsoft, 09/17/21; and Swenson, Jeremy, 03/29/24.

1. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):

The zero-trust model represents a paradigm shift in cybersecurity, advocating for the premise that no user or system, irrespective of their position within the corporate network, should be automatically trusted. This approach entails stringent enforcement of access controls and continual verification processes to validate the legitimacy of users and devices. By adopting a need-to-know-only access philosophy, often referred to as the principle of least privilege, organizations operate under the assumption of compromise, necessitating robust security measures at every level.

Implementing a zero-trust framework involves a comprehensive overhaul of traditional security practices. It entails the adoption of single sign-on functionalities at the individual device level and the enhancement of multifactor authentication protocols. Additionally, it requires the implementation of advanced role-based access controls (RBAC), fortified network firewalls, and the formulation of refined need-to-know policies. Effective application whitelisting and blacklisting mechanisms, along with regular group membership reviews, play pivotal roles in bolstering security posture. Moreover, deploying state-of-the-art privileged access management (PAM) tools, such as CyberArk for password check-out and vaulting, enables organizations to enhance toxic-combination monitoring and reporting capabilities.

App-to-app orchestration refers to the process of coordinating and managing interactions between different applications within a software ecosystem to achieve specific business objectives or workflows. It involves the seamless integration and synchronization of multiple applications to automate complex tasks or processes, facilitating efficient data flow and communication between them. Moreover, it aims to streamline and optimize various operational workflows by orchestrating interactions between disparate applications in a cohesive manner. This orchestration process typically involves defining the sequence of actions, dependencies, and data exchanges required to execute a particular task or workflow across multiple applications.

However, while the concept of zero-trust offers a compelling vision for fortifying cybersecurity, its effective implementation relies on selecting and integrating the right technological components seamlessly within the existing infrastructure stack. This necessitates careful consideration to ensure that these components complement rather than undermine the orchestration of security measures. Nonetheless, there is optimism that the rapid development and deployment of AI-based custom middleware can mitigate potential complexities inherent in orchestrating zero-trust capabilities. Through automation and orchestration, these technologies aim to streamline security operations, ensuring that the pursuit of heightened security does not inadvertently introduce operational bottlenecks or obscure visibility through complexity.
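As a concrete illustration of the least-privilege idea, the minimal sketch below enforces a deny-by-default permission check on every call; the roles and permissions are invented for the example, and a real deployment would pull them from an identity provider.

```python
from functools import wraps

# Role-to-permission map; least privilege means each role gets only what it needs.
PERMISSIONS = {
    "clerk": {"read_record"},
    "auditor": {"read_record", "read_log"},
    "admin": {"read_record", "read_log", "modify_record"},
}

def requires(permission: str):
    """Deny by default: every call re-checks the caller's role, trusted or not."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_role: str, *args, **kwargs):
            if permission not in PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"{user_role} may not {permission}")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires("modify_record")
def update_case_file(user_role: str, case_id: str) -> str:
    return f"case {case_id} updated"

print(update_case_file("admin", "2024-117"))   # allowed
# update_case_file("clerk", "2024-117")        # raises PermissionError
```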

2. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:

The utilization of artificial intelligence (AI) is on the rise to bolster threat detection capabilities. Through machine learning algorithms, extensive datasets are scrutinized to discern patterns suggestive of potential security risks. This facilitates swifter and more precise identification of malicious activities. Enhanced with refined machine learning algorithms, security information and event management (SIEM) systems are adept at pinpointing anomalies in network traffic, application logs, and data flow, thereby expediting the identification of potential security incidents for organizations.

False positives, a sustained issue in the past, should also decline: large, overconfident companies have repeatedly wasted millions of dollars per year fine-tuning security data lakes that mostly produce garbage anomaly detection reports [1], [2], the kind of output good artificial intelligence (AI) would laugh at. We are getting there. For now, technology vendors try to solve this via better SIEM functionality at an increased price, but we expect prices to drop sharply as the automation matures.

With enhanced natural language processing (NLP) methodologies, artificial intelligence (AI) systems possess the capability to analyze unstructured data originating from various sources such as social media feeds, images, videos, and news articles. This proficiency enables organizations to compile valuable threat intelligence, staying abreast of indicators of compromise (IOCs) and emerging attack strategies. Notable vendors offering such services include Darktrace, IBM, CrowdStrike, and numerous startups poised to enter the market. The landscape presents ample opportunities for innovation, necessitating the abandonment of past biases. Young, innovative minds well-versed in web 3.0 technologies hold significant value in this domain. Consequently, in the future, more companies are likely to opt for building their own tailored threat detection tools, leveraging advancements in AI platform technology, rather than purchasing pre-existing solutions.
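Even a simple statistical baseline shows how anomaly scoring cuts alert noise: instead of alerting on every failed login, flag only hours that deviate sharply from the norm. The counts below are toy data, and real SIEM models combine many such signals.

```python
import statistics

# Hourly failed-login counts from an identity log (toy data).
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4, 6, 5]
current_hour = 42

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (current_hour - mean) / stdev

# A high z-score flags the hour for review instead of alerting on every failure,
# which is one simple way to reduce false positives.
print(f"z-score = {z:.1f}", "ALERT" if z > 3 else "normal")
```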

3. Artificial Intelligence (AI) Driven Threat Response Ability Advances:

Artificial intelligence (AI) isn't just confined to threat detection; it's increasingly playing a pivotal role in automating response actions within cybersecurity operations. This encompasses a range of tasks, including the automatic isolation of compromised systems, the blocking of malicious internet protocol (IP) addresses, the adjustment of firewall configurations, and the coordination of responses to cyber incidents—all achieved with greater efficiency and cost-effectiveness. By harnessing AI-driven algorithms, security orchestration, automation, and response (SOAR) platforms empower organizations to analyze and address security incidents swiftly and intelligently.

SOAR platforms capitalize on AI capabilities to streamline incident response processes, enabling security teams to automate repetitive tasks and promptly react to evolving threats. These platforms leverage AI not only to detect anomalies but also to craft tailored responses, thereby enhancing the overall resilience of cybersecurity infrastructures. Leading examples of such platforms include Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR, each exemplifying the fusion of AI-driven automation with comprehensive security orchestration capabilities.

Microsoft Sentinel, for instance, utilizes AI algorithms to sift through vast volumes of security data, identifying potential threats and anomalies in real time. It then orchestrates response actions, such as isolating compromised systems or blocking suspicious IP addresses, with precision and speed. Similarly, Rapid7 InsightConnect integrates AI-driven automation to streamline incident response workflows, enabling security teams to mitigate risks more effectively. FortiSOAR, on the other hand, offers a comprehensive suite of AI-powered tools for incident analysis, response automation, and threat intelligence correlation, empowering organizations to proactively defend against cyber threats. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit, leaving them more time to analyze complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [3]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to avert this with the same AI but with no governance.
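The sketch below mimics, in miniature, what a SOAR playbook automates: containment actions for high-severity alerts plus a ticket for human review. The alert fields and response functions are stand-ins for real firewall, EDR, and ITSM integrations, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    severity: str    # "low" | "medium" | "high"
    category: str    # e.g. "malware", "brute_force"

def block_ip(ip: str) -> None:
    print(f"[firewall] blocking {ip}")        # stand-in for a firewall API call

def isolate_host(ip: str) -> None:
    print(f"[edr] isolating host at {ip}")    # stand-in for an EDR API call

def open_ticket(alert: Alert) -> None:
    print(f"[itsm] ticket opened for {alert.category} from {alert.source_ip}")

def run_playbook(alert: Alert) -> None:
    """Observe-orient-decide-act in miniature: automate the routine, escalate the rest."""
    if alert.severity == "high":
        block_ip(alert.source_ip)
        isolate_host(alert.source_ip)
    open_ticket(alert)  # every action still lands in the queue for human review

run_playbook(Alert(source_ip="203.0.113.7", severity="high", category="malware"))
```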

4. Artificial Intelligence (AI) Streamlines Cloud Security Posture Management (CSPM):

With the escalating migration of organizations to cloud environments, safeguarding the security of cloud assets emerges as a paramount concern. While industry giants like Microsoft, Oracle, and Amazon Web Services (AWS) dominate this landscape with their comprehensive cloud offerings, numerous large organizations opt to establish and maintain their own cloud infrastructures to retain greater control over their data and operations. In response to the evolving security landscape, the adoption of cloud security posture management (CSPM) tools has become imperative for organizations seeking to effectively manage and fortify their cloud environments.

CSPM tools play a pivotal role in enhancing the security posture of cloud infrastructures by facilitating continuous monitoring of configurations and swiftly identifying any misconfigurations that could potentially expose vulnerabilities. These tools operate by autonomously assessing cloud configurations against established security best practices, ensuring adherence to stringent compliance standards. Key facets of their functionality include the automatic identification of unnecessary open ports and the verification of proper encryption configurations, thereby mitigating the risk of unauthorized access and data breaches. "Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users" [4]. This has considerations at both the cloud user and provider level, especially considering that artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug-ins from different vendors, making it all the more complex.

Furthermore, CSPM solutions enable organizations to proactively address security gaps and bolster their resilience against emerging threats in the dynamic cloud landscape. By providing real-time insights into the security status of cloud assets, these tools empower security teams to swiftly remediate vulnerabilities and enforce robust security controls. Additionally, CSPM platforms facilitate comprehensive compliance management by generating detailed reports and audit trails, facilitating adherence to regulatory requirements and industry standards.

In essence, as organizations navigate the complexities of cloud adoption and seek to safeguard their digital assets, CSPM tools serve as indispensable allies in fortifying cloud security postures. By offering automated monitoring, proactive threat detection, and compliance management capabilities, these solutions empower organizations to embrace the transformative potential of cloud technologies while effectively mitigating associated security risks.
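For a taste of what CSPM tools automate, the sketch below uses boto3 to flag S3 buckets whose public-access block is missing or incomplete. It assumes AWS credentials are already configured in the environment, and it checks only one of the many controls a real CSPM product covers.

```python
# pip install boto3  (assumes AWS credentials are configured in the environment)
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_locked_down(bucket: str) -> bool:
    """Check that an S3 bucket blocks all four categories of public access."""
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        return False  # no block configuration at all counts as a finding
    return all(config.values())

for bucket in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
    status = "ok" if bucket_is_locked_down(bucket) else "REVIEW: public access possible"
    print(f"{bucket}: {status}")
```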

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary's University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

References:

[1] Tobin, Donal; "What Challenges Are Hindering the Success of Your Data Lake Initiative?" Integrate.io. 10/05/22: https://www.integrate.io/blog/data-lake-initiative/

[2] Chuvakin, Anton; "Why Your Security Data Lake Project Will … Well, Actually …" Medium. 10/22/22. https://medium.com/anton-on-security/why-your-security-data-lake-project-will-well-actually-78e0e360c292

[3] Michael, Katina, Abbas, Roba, and Roussos, George; "AI in Cybersecurity: The Paradox." IEEE Transactions on Technology and Society. Vol. 4, no. 2: pg. 104-109. 2023: https://ieeexplore.ieee.org/abstract/document/10153442

[4] Rosencrance, Linda; "How to choose the best cloud security posture management tools." CSO Online. 10/30/23: https://www.csoonline.com/article/657138/how-to-choose-the-best-cloud-security-posture-management-tools.html

NIST Cybersecurity Framework (CSF) New Version 2.0 Summary

Fig. 1. NIST CSF 2.0 Stepper, NIST, 2024.

#cyberresilience #cybersecurity #generativeai #cyberthreats #enterprisearchitecture #CIO #CTO #riskmanagement #bias #governance #RBAC #CybersecurityFramework #Cybersecurity #NISTCSF #RiskManagement #DigitalResilience #nist #nistframework #cyberawareness

The National Institute of Standards and Technology (NIST) has updated its widely used Cybersecurity Framework (CSF) — a free, well-respected landmark guidance document for reducing cybersecurity risk. It is important to note, however, that most of the framework core has remained the same. Here are the core components the security community knows:

Govern (GV): Sets forth the strategic path and guidelines for managing cybersecurity risks, ensuring harmony with business goals and adherence to legal requirements and standards. Govern is the newest addition: it was implied in earlier versions but is now explicitly illustrated as touching every aspect of the framework. It seeks to establish and monitor your company's cybersecurity risk management strategy, expectations, and policy.

1. Identify (ID): Entails cultivating a comprehensive organizational comprehension of managing cybersecurity risks to systems, assets, data, and capabilities.

2. Protect (PR): Concentrates on deploying suitable measures to guarantee the provision of vital services.

3. Detect (DE): Specifies the actions for recognizing the onset of a cybersecurity incident.

4. Respond (RS): Outlines the actions to take in the event of a cybersecurity incident.

5. Recover (RC): Focuses on restoring capabilities or services that were impaired due to a cybersecurity incident.

The new 2.0 edition is structured for all audiences, industry sectors, and organization types, from the smallest startups and nonprofits to the largest corporations and government departments — regardless of their level of cybersecurity preparedness and complexity.

Fig. 2. NIST CSF 2.0 Function Breakdown, NIST, 2024.

Here are some key updates:

Emphasis is placed on the framework's expanded scope, extending beyond critical infrastructure to encompass all organizations. Importantly, it better incorporates and expands upon supply chain risk management processes. It also introduces a new focus on governance, highlighting cybersecurity as a critical enterprise risk with many dependencies. This is critically important with the emergence of artificial intelligence.

To make it easier for a wide variety of organizations to implement the CSF 2.0, NIST has developed quick-start guides customized for various audiences, along with case studies showcasing successful implementations and a searchable catalog of references, all aimed at facilitating adoption.

The CSF 2.0 is aligned with the National Cybersecurity Strategy and includes a suite of resources to adapt to evolving cybersecurity needs, emphasizing a comprehensive approach to managing cybersecurity risk. New adopters can benefit from implementation examples and quick-start guides tailored to specific user types, facilitating easier integration into their cybersecurity practices. The CSF 2.0 Reference Tool simplifies implementation, enabling users to access, search, and export core guidance data in user-friendly and machine-readable formats. A searchable catalog of references allows organizations to cross-reference their actions with the CSF, linking to over 50 other cybersecurity documents and facilitating comprehensive risk management. The Cybersecurity and Privacy Reference Tool (CPRT) contextualizes NIST resources with other popular references, facilitating communication across all levels of an organization.

NIST aims to continually enhance CSF resources based on community feedback, encouraging users to share their experiences to improve collective understanding and management of cybersecurity risk. The CSF's international adoption is significant, with translations of previous versions into 13 languages. NIST expects CSF 2.0 to follow suit, further expanding its global reach. NIST's collaboration with ISO/IEC aligns cybersecurity frameworks internationally, enabling organizations to utilize CSF functions in conjunction with ISO/IEC resources for comprehensive cybersecurity management.

Resources:

1. NIST CSF 2.0 Fact Sheet.
2. NIST CSF 2.0 PDF.
3. NIST CSF 2.0 Reference Tool.
4. NIST CSF 2.0 YouTube Breakdown.

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary's University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

Key Artificial Intelligence (AI) Cyber-Tech Trends and What They Mean for the Future

Minneapolis –

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #infosec #musktwitter #disinformation #cio #ciso #cto #chatgpt #openai #airisk #iam #rbac #artificialintelligence #samaltman #aiethics #nistai #futurereadybusiness #futureofai

By Jeremy Swenson & Matthew Versaggi

Fig. 1. Quantum ChatGPT Growth Plus NIST AI Risk Management Framework Mashup [1], [2], [3].

Summary:

This year is unique: policy makers and business leaders grew concerned with artificial intelligence (AI) ethics, disinformation morphed, and AI saw hyper growth, including connections to increased crypto money laundering via splitting/mixing. Impressively, AI cyber tools became more capable in the areas of zero-trust orchestration, cloud security posture management (CSPM), and threat response via improved machine learning; quantum-safe cryptography ripened; and authentication made real-time monitoring advancements, though some hype remains. Moreover, the mass resignation/gig economy (remote work) remained a large part of the catalyst for all of these trends.

Introduction:

Every year we like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since policy makers and business leaders grew concerned with artificial intelligence (AI) ethics [4], disinformation morphed, AI saw hyper growth [5], crypto money laundering via splitting/mixing grew [6], and AI cyber tools became more capable, while the mass resignation/gig economy remained a large part of the catalyst for all of these trends. By August 2023, ChatGPT reached 1.43 billion website visits per month and about 180.5 million registered users [7]. This even attracted many non-technical naysayers. Impressively, the platform was only nine months old then and just turned a year old in November [8]. Usage of AI tools like ChatGPT is going to continue to grow in many sectors at exponential rates. As a result, the trends and considerations below are likely to significantly impact government, education, high-tech, startups, and large enterprises in big and small ways, albeit with some surprises.

1. The Complex Ethics of Artificial Intelligence (AI) Swarms Policy Makers and Industry Resulting in New Frameworks:

The ethical use of artificial intelligence (AI) as a conceptual and increasingly practical dilemma has gained a lot of media attention and research in the last few years from those in philosophy (ethics, privacy), politics (public policy), academia (concepts and principles), and economics (trade policy and patents), all of whom have weighed in heavily. As a result, we find this space is beginning to mature. Sovereign nations (the USA, the EU, and others globally) have developed and socialized ethical policies and frameworks [9], [10]. Major corporations, motivated by profit, are devising their own ethical vehicles and structures, often taking a legalistic view first [11]. Moreover, the World Economic Forum (WEF) has weighed in on this matter in collaboration with PricewaterhouseCoopers (PwC) [12]. All of this contributes to the accelerated pace of maturity of this area in general. The result is the establishment of shared conceptual viewpoints, early-stage security frameworks, accepted policies, guidelines, and governance structures to support the evolution of artificial intelligence (AI) in ethical ways.

For example, the Department of Defense (DOD) has formally adopted five principles for the ethical development of artificial intelligence capabilities, as follows [13]:

1. Responsible
2. Equitable
3. Traceable
4. Reliable
5. Governable

Traceable and governable seem to be the clearest and most important principles, while equitable and responsible seem gray at best and could be deemphasized in a heightened wartime context. The latter two echo the corporate social responsibility (CSR) efforts found more often in the private sector.

The WEF via PwC has issued its Nine AI Ethical Principles for organizations to follow [14], and the Office of the Director of National Intelligence (ODNI) has released its Framework for AI Ethics [15]. Importantly, the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework, as outlined in Fig. 2 and Fig. 3. NIST also released a playbook to support its implementation and has hosted several working sessions discussing it with industry, which we attended virtually [16]. It seems the mapping aspect could take you down many AI rabbit holes, some unforeseen, implying complex risk. Mapping also impacts how you measure and manage. None of this is fully clear, and much of it will change as ethical AI governance matures.

Fig. 2. NIST AI Risk Management Framework (AI RMF) 1.0 [17].

Fig. 3. NIST AI Risk Management Framework: Actors Across AI Lifecycle Stages (AI RMF) 1.0 [18].

The actors in Fig. 3 cover a wide swath of spaces where artificial intelligence (AI) plays, and appropriately so, as AI is considered a GPT (general purpose technology) like electricity, rubber, and the like, applicable ubiquitously in our lives [19]. This encompasses cognitive technology, digital reality, ambient experiences, autonomous vehicles and drones, quantum computing, distributed ledgers, and robotics, to name a few. These all predate the emergence of generative AI, which will likely put these vehicles to the test much earlier than expected. Yet all of them can be mapped across the AI lifecycle stages in Fig. 3 to clarify the activities, actors, and dimensions, and if a use case reaches the build stage, more scrutiny will need to be applied.

Scrutiny can come in the form of DevSecOps, but that is extremely hard to do with the exponentially massive code and training datasets required by the learning models, at least at this point. Moreover, we are not sure any AI ethics framework yet does justice to quality assurance (QA) and secure coding best practices. However, the two NIST figures above at least clarify relationships, flows, inputs, and outputs, though all of this will need to be greatly customized to an organization to have any teeth. We imagine those use cases will come out of future NIST working sessions with industry.

Lastly, the most crucial factor in AI ethics governance is what Fig. 3 calls "People and Planet". This is because people and the planet can experience the negative aspects of AI in ways the designers did not imagine, and that feedback is valuable to product governance to prevent bigger AI disasters. Consider, for example, AI taking control of the air traffic control system and causing reroutes or accidents, or AI malware spreading faster than antivirus products can defend against it, creating a cyber pandemic. Thus, making sure bias is reduced and safety increased (per the DOD's five AI principles) is key but certainly not easy or clear.

        2. ChatGPT and Other Artificial Intelligence (AI) Tools Have Huge Security Risks:

        It is fair to start off discussing the risks posed by ChatGPT and related tools to balance out all the positive feature coverage in the media and popular culture in recent months. First of all, with artificial intelligence (AI), every cyber threat actor has a new tool to better send spam, steal data, spread malware, build misinformation mills, grow botnets, launder cryptocurrency through shady exchanges [20], create fake profiles on multiple platforms, create fake romance chatbots, and to build the most complex self-replicating malware that will be akin to zero-day exploits much of the time.

        One commentator described it this way in his widely circulated LinkedIn article: “It can potentially be a formidable social engineering and phishing weapon where non-native speakers can create flawlessly written phishing emails. Also, it will be much simpler for all scammers to mimic their intended victim’s tone, word choice, and writing style, making it more difficult than ever for recipients to tell the difference between a genuine and fraudulent email” [21]. Think of MailChimp on steroids, with a sophisticated AI team crafting billions of phishing e-mails and texts customized to impressively realistic detail, including phone calls with fake voices that mimic your loved ones to build fake corroboration [22].

        SAP’s Head of Cybersecurity Market Strategy, Gabriele Fiata, took the words out of our mouths when he described it this way: “The threat landscape surrounding artificial intelligence (AI) is expanding at an alarming rate. Between January to February 2023, Darktrace researchers have observed a 135% increase in ‘novel social engineering’ attacks, corresponding with the widespread adoption of ChatGPT” [23]. This is just the beginning. Malware-as-a-service propagation, fake bank sites, travel scams, and fake IT support centers will multiply to scam and extort the vulnerable, including elders, schools, local governments, and small businesses. There is also an increased likelihood that antivirus and data loss prevention (DLP) tools will become less effective as AI morphs. Lastly, cyber criminals can and will use generative AI for advanced evidence tampering, creating fake content to confuse or dirty the chain of custody, lessen reliability, or outright frame the wrong actor, all while the government lags the tech sector. It is truly a digital arms race.

        Fig. 4. ChatGPT Exploit Risk Infographic [24].

        In the next section we discuss how artificial intelligence (AI) can enhance information security: increasing compliance, reducing risk, enabling new features of great value, and enabling application orchestration for threat visibility.

        3. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):

        The zero-trust model assumes that no user or system, even those within the corporate network, should be trusted by default. Access controls are strictly enforced, and continuous verification is performed to ensure the legitimacy of users and devices. Zero-trust moves organizations to a need-to-know-only access mindset (least privilege) with inherent deny rules, all while assuming you are already compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of applications, group membership reviews, and state-of-the-art privileged access management (PAM) tools. Password checkout and vaulting tools like CyberArk will improve to better inform toxic-combination monitoring and reporting. There is still work in selecting and building the right tech components that fit into, rather than work against, the infrastructure orchestration stack. However, we believe rapidly built and deployed AI-based custom middleware can alleviate security orchestration mismatches in many cases. All of this is likely to better automate and orchestrate zero-trust capabilities so that one part does not hinder another through complexity fog.
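
        To make the deny-by-default idea concrete, here is a minimal Python sketch of a zero-trust style access check. The roles, resources, and device-posture fields are hypothetical illustrations, not any particular vendor's model:

```python
# A minimal deny-by-default access check illustrating the zero-trust
# "least privilege" mindset. Roles, resources, and posture fields are
# hypothetical, not any particular vendor's model.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str
    resource: str
    mfa_verified: bool
    device_trusted: bool

# Explicit allow rules; any (role, resource) pair not listed is denied.
ALLOW_RULES = {
    ("analyst", "siem-dashboard"),
    ("dba", "prod-database"),
}

def is_allowed(req: AccessRequest) -> bool:
    # Continuous verification: every request re-checks MFA and device posture.
    if not (req.mfa_verified and req.device_trusted):
        return False
    # Need-to-know: only explicitly whitelisted pairs pass; default is deny.
    return (req.role, req.resource) in ALLOW_RULES

print(is_allowed(AccessRequest("alice", "analyst", "siem-dashboard", True, True)))  # True
print(is_allowed(AccessRequest("bob", "analyst", "prod-database", True, True)))     # False
```

        Note the design choice: there is no "allow" fallback anywhere, so a new resource is unreachable until someone explicitly grants access to it.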

        4. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:

        Artificial intelligence (AI) is increasingly being used to enhance threat detection capabilities. Machine learning algorithms analyze vast amounts of data to identify patterns indicative of potential security threats. This enables quicker and more accurate identification of malicious activities. Security information and event management (SIEM) systems enhanced with improved machine learning algorithms can detect anomalies in network traffic, application logs, and data flow – helping organizations identify potential security incidents faster.
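
        As a toy illustration of this kind of anomaly detection, the sketch below trains scikit-learn's IsolationForest on synthetic network-flow features and flags an exfiltration-like outlier. The feature columns and values are invented for illustration, not drawn from any particular SIEM:

```python
# A toy sketch of ML-based anomaly detection over network-flow features,
# in the spirit of an AI-enhanced SIEM. All feature values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: bytes transferred, connection duration (s), failed logins per hour.
normal_traffic = rng.normal(loc=[5_000, 30, 0.2],
                            scale=[1_500, 10, 0.3], size=(500, 3))
suspicious = np.array([[90_000, 400, 12.0]])  # exfil-like flow, many failed logins

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious))          # [-1] -> flag for analyst review
print(model.predict(normal_traffic[:3]))  # mostly [1 1 1]
```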

        False positives should also be reduced, which has been a sustained issue: large, overconfident companies have repeatedly wasted millions of dollars per year fine-tuning useless security data lakes (we have seen this) that mostly produce garbage anomaly detection reports [25], [26]. That is the kind of output good artificial intelligence (AI) laughs at, and we are getting there. Meanwhile, technology vendors try to solve this via better SIEM functionality at an increased price. Yet we expect prices to drop substantially as the automation matures.

        With improved natural language processing (NLP) techniques, artificial intelligence (AI) systems can analyze unstructured data sources such as social media feeds, photos, videos, and news articles to assemble useful threat intelligence. This ability to process and understand textual data empowers organizations to stay informed about indicators of compromise (IOCs) and new attack tactics. Vendors that provide these services include Darktrace, IBM, and CrowdStrike, and many startups will likely join soon. This space is wide open, and the biases of the past need to be set aside if we want innovation; young, fresh minds who know web 3.0 are valuable here. Thus, in the future more companies will likely not have to buy, but rather can build, their own customized threat detection tools informed by advancements in AI platform technology.
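
        The simplest slice of such a pipeline is IOC extraction from raw text. The following sketch pulls IP addresses, domains, and SHA-256 hashes out of an unstructured post using regular expressions; the patterns are deliberately simplified for illustration:

```python
# A small sketch of pulling indicators of compromise (IOCs) out of
# unstructured text -- the simplest form of the NLP pipeline described above.
import re

IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", re.I),
    "sha256": re.compile(r"\b[a-f0-9]{64}\b", re.I),
}

def extract_iocs(text: str) -> dict:
    found = {name: set(p.findall(text)) for name, p in IOC_PATTERNS.items()}
    found["domain"] -= found["ipv4"]   # bare IPs also match the domain pattern
    return {name: sorted(vals) for name, vals in found.items()}

post = ("New loader beacons to 203.0.113.45 and evil-updates.example.com; "
        "payload hash " + "a" * 64)
print(extract_iocs(post))
```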

        5. Quantum-Safe Cryptography Ripens:

        Quantum computing is a quickly evolving technology that uses the laws of quantum mechanics, such as superposition and quantum interference, to solve problems too complex for traditional computers [27]. Some cases where quantum computers can provide a speed boost include simulation of physical systems, machine learning (ML), optimization, and more. Traditional cryptographic algorithms could be vulnerable because they rest on mathematical problems, like integer factorization, whose patterns a quantum computer could solve efficiently. “Industry experts generally agree that within 7-10 years, a large-scale quantum computer may exist that can run Shor’s algorithm and break current public-key cryptography causing widespread vulnerabilities” [28]. Quantum-safe or quantum-resistant cryptography is designed to withstand attacks from quantum computers, often with artificial intelligence (AI) assistance, ensuring the long-term security of sensitive data. For example, AI can help enhance post-quantum cryptographic algorithms such as lattice-based cryptography or hash-based cryptography to secure communications [29]. Lattice-based cryptography is a cryptographic system based on the mathematical concept of a lattice. In a lattice, lines connect points to form a geometric structure or grid (Fig. 5).

        Fig. 5. Simple Lattice Cryptography Grid [30].


        This geometric lattice structure encodes and decodes messages. Although the drawing looks finite, the lattice is not: it represents a pattern that continues into the infinite (Fig. 6).

        Fig. 6. Complex Lattice Cryptography Grid [31].
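
        To give a feel for the hard problem underneath, below is a toy learning-with-errors (LWE) encryption sketch, the kind of noisy lattice problem that lattice-based schemes build on. The parameters are illustrative only and far too small to be secure:

```python
# A toy learning-with-errors (LWE) example -- the hard problem behind
# lattice-based cryptography. Parameters are illustrative and insecure.
import random

q, n, m = 3329, 8, 16                    # modulus, secret dimension, samples
s = [random.randrange(q) for _ in range(n)]            # secret vector

def noisy_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)                          # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

pub = [noisy_sample() for _ in range(m)]               # public key

def encrypt(bit: int):
    subset = random.sample(pub, m // 2)                # random subset sum
    u = [sum(a[i] for a, _ in subset) % q for i in range(n)]
    v = (sum(b for _, b in subset) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v) -> int:
    d = (v - sum(ui * si for ui, si in zip(u, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0         # near q/2 -> bit 1

u, v = encrypt(1)
print(decrypt(u, v))   # 1: the accumulated noise stays small enough to round away
```

        The security intuition is that recovering the secret vector from the noisy public samples is a hard lattice problem, while the small noise still rounds away cleanly for the legitimate holder of the secret.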

        Lattice-based cryptography benefits sensitive and highly targeted assets like large data centers, utilities, banks, hospitals, and government infrastructure generally. In other words, there will likely be mass adoption of quantum-resistant encryption for better security. Lastly, we used ChatGPT as an assistant to compile the below specific benefits of quantum cryptography, albeit with some manual corrections [32]:

        1. Detection of Eavesdropping:
          Quantum key distribution protocols can detect the presence of an eavesdropper by the disturbance introduced during the quantum measurement process, providing a level of security beyond traditional cryptography (see the simulation sketch after this list).
        2. Quantum-Safe Against Future Computers:
          Quantum computers have the potential to break many traditional cryptographic systems. Quantum cryptography is considered quantum-safe, as it relies on the fundamental principles of quantum mechanics rather than mathematical complexity.
        3. Near Unconditional Security:
          Quantum cryptography provides near unconditional security based on the principles of quantum mechanics. Any attempt to intercept or measure the quantum state will disturb the system, and this disturbance can be detected. Note that ChatGPT wrongly said “unconditional Security” and we corrected to “near unconditional security” as that is more realistic.
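
        As a rough illustration of benefit 1 above, here is a simplified BB84-style simulation. It is a didactic sketch, not a faithful physics model: an eavesdropper measuring in random bases pushes the error rate on the sifted key toward 25%, which the legitimate parties can detect by comparing a sample of bits:

```python
# A simplified BB84 simulation: an eavesdropper who measures qubits in
# random bases raises the error rate on the sifted key, which the two
# parties can detect by comparing a sample of it.
import random

def run_bb84(n_bits: int, eavesdrop: bool) -> float:
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bases   = [random.choice("+x") for _ in range(n_bits)]

    received = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:                        # Eve measures in a random basis
            e_basis = random.choice("+x")
            if e_basis != a_basis:
                bit = random.randint(0, 1)   # wrong basis randomizes the bit
            a_basis = e_basis                # photon is re-sent in Eve's basis
        if b_basis != a_basis:               # Bob's mismatched basis randomizes too
            bit = random.randint(0, 1)
        received.append(bit)

    # Sift: keep only positions where Alice's and Bob's bases matched.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, received, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return errors / len(sifted)

print(f"error rate, no Eve:   {run_bb84(20_000, False):.2%}")   # ~0%
print(f"error rate, with Eve: {run_bb84(20_000, True):.2%}")    # ~25%
```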

        6. Artificial Intelligence (AI) Driven Threat Response Ability Advances:

        Artificial intelligence (AI) is used not only for threat detection but also for automating response actions [33]. This can include automatically isolating compromised systems, blocking malicious internet protocol (IP) addresses, tightening firewall rules, or orchestrating a coordinated response to a cyber incident, all for less money. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few current examples. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit automatically, leaving more time for analysis of complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [34], allowing them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to counter this with the same AI but with no governance.
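
        The sketch below shows the shape of such an automated playbook organized around the OODA loop. The alert fields, classifications, and response actions are hypothetical stand-ins for a real SOAR platform's connectors:

```python
# A sketch of a SOAR-style automated playbook organized around the OODA
# loop. Alert fields, thresholds, and actions are hypothetical.
def observe(alert: dict) -> dict:
    # Gather the raw facts from the alert feed.
    return {"src_ip": alert["src_ip"], "host": alert["host"],
            "failed_logins": alert.get("failed_logins", 0)}

def orient(facts: dict) -> str:
    # Classify against simple context; a real system would use ML here.
    return "credential-stuffing" if facts["failed_logins"] > 50 else "benign"

def decide(classification: str) -> list:
    # Pick a pre-approved playbook for the classification.
    playbooks = {
        "credential-stuffing": ["block_ip", "isolate_host", "open_ticket"],
        "benign": [],
    }
    return playbooks[classification]

def act(actions: list, facts: dict) -> None:
    for action in actions:
        # In a real SOAR tool these would call firewall / EDR / ITSM APIs.
        print(f"{action} -> {facts['src_ip']} / {facts['host']}")

alert = {"src_ip": "198.51.100.7", "host": "web-01", "failed_logins": 212}
facts = observe(alert)
act(decide(orient(facts)), facts)
```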

        7. Artificial Intelligence (AI) Streamlines Cloud Security Posture Management (CSPM):

        As organizations increasingly migrate to cloud environments, ensuring the security of cloud assets becomes key. Vendors like Microsoft, Oracle, and Amazon Web Services (AWS) lead this space, though large organizations run their own clouds for control as well. Cloud security posture management (CSPM) tools help organizations manage and secure their cloud infrastructure by continuously monitoring configurations and detecting misconfigurations that could lead to vulnerabilities [35]. These tools automatically assess cloud configurations for compliance with security best practices, including ensuring that only necessary ports are open and that encryption is properly configured. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [36]. This has considerations at both the cloud user and provider level, especially considering that artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug-ins from different vendors, making it all the more complex.
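
        A minimal sketch of the CSPM idea follows: scan declarative resource configurations for risky settings such as public, unencrypted storage or exposed remote-desktop ports. The resource schema is invented for illustration and does not mirror any specific provider's API:

```python
# A minimal sketch of the CSPM idea: scan declarative cloud-resource
# configs for risky settings. The schema here is invented for illustration.
RESOURCES = [
    {"name": "public-assets", "type": "storage_bucket",
     "public_read": True,  "encrypted": True},
    {"name": "hr-records",    "type": "storage_bucket",
     "public_read": True,  "encrypted": False},
    {"name": "app-server",    "type": "vm",
     "open_ports": [22, 80, 3389], "encrypted": True},
]

def findings(resource: dict) -> list:
    issues = []
    if resource.get("public_read") and not resource.get("encrypted"):
        issues.append("publicly readable AND unencrypted")
    if 3389 in resource.get("open_ports", []):
        issues.append("RDP (3389) open to the network")
    return issues

for r in RESOURCES:
    for issue in findings(r):
        print(f"[MISCONFIG] {r['type']} '{r['name']}': {issue}")
```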

        8. Artificial Intelligence (AI) Enhanced Authentication Arrives:

        Artificial intelligence (AI) is being utilized to strengthen user authentication methods. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege [37]. Two-factor authentication remains the bare-minimum standard, with many leading identity and access management (IAM) application makers, including Okta, SailPoint, and Google, experimenting with AI for improved analytics and functionality. Both two-factor and multifactor authentication benefit from AI advancements in machine learning via real-time access rights reassignment and improved role groupings [38]. However, multifactor remains stronger at this point because it includes something you are: biometrics. The jury is out on which method will remain the security leader, because biometrics can be faked by AI [39]. Importantly, AI tools can remove fake or orphaned accounts much more quickly, reducing risk, though they likely will not get it right 100% of the time, so some inconvenience remains.
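
        As a toy example of behavioral biometrics, the sketch below compares a login session's keystroke-timing profile against a user's enrolled baseline and requests step-up authentication when the distance is too large. The timings and threshold are made up for illustration:

```python
# A toy behavioral-biometrics check: compare a session's keystroke
# timing profile against the user's enrolled baseline. All numbers
# are hypothetical.
import math

def profile_distance(baseline: list, session: list) -> float:
    # Euclidean distance between mean inter-key intervals (ms).
    return math.dist(baseline, session)

# Enrolled baseline: mean delays between key pairs for this user (ms).
enrolled = [105.0, 98.0, 142.0, 88.0]

attempts = {
    "legitimate user":   [110.0, 95.0, 150.0, 90.0],
    "possible imposter": [60.0, 55.0, 70.0, 52.0],
}

THRESHOLD = 40.0  # hypothetical cutoff, tuned on historical data in practice

for who, timings in attempts.items():
    d = profile_distance(enrolled, timings)
    verdict = "ok" if d < THRESHOLD else "step-up auth required"
    print(f"{who}: distance={d:.1f} -> {verdict}")
```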

        Conclusion and Recommendations:

        Artificial intelligence (AI) remains a leading catalyst for digital transformation in tech automation, identity and access management (IAM), big data analytics, technology orchestration, and collaboration tools. Quantum-resistant cryptography, aided by AI, will bolster encryption as old methods are replaced. All of the government actions to incubate AI ethics are a good start, and the NIST AI Risk Management Framework (AI RMF) 1.0 is long overdue; it will likely be tweaked based on private-sector feedback. However, adding the DOD’s five principles for the ethical development of AI to the NIST AI RMF could yield better synergies. This approach should be used by the private sector and academia in customized ways. AI product ethical deviations should be treated as quality control and compliance issues and remediated immediately.

        Organizations should consider forming an AI governance committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. ChatGPT is a good encyclopedia and a cool Boolean search tool, yet it got some things wrong about quantum computing in this article, which we cited and corrected. The Simplified AI text-to-graphics generator was cool and useful, but it needed some manual edits as well. Both of these generative AI tools will likely get better with time.

        Artificial intelligence (AI) will spur many mobile malware and ransomware variants faster than Apple and Google can block them. Worse, people often have no mobile antivirus on their smart phone even when they have it on their personal and work computers, and a culture of happy-go-lucky application downloading compounds the problem. As a result, more breaches should be expected via smart phones, watches, and eyeglasses from AI-enabled threats.

        Therefore, education and awareness around the review and removal of non-essential mobile applications is a top priority, especially for mobile devices used separately or jointly for work purposes. Containerization is required via a mobile device management (MDM) tool such as JAMF, Hexnode, VMWare, or Citrix Endpoint Management. A bring your own device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. Mapping the mobile ecosystem components in detail, including the AI touch points, is a must.

        The growth and acceptability of mass work from home (WFH), combined with the mass resignation and gig economy, remind employers that great pay and culture alone are not enough to keep top talent. At this point AI only takes away some simple jobs while creating AI support jobs, though the percentages are not yet clear. Signing bonuses and personalized treatment are likely needed to retain top talent. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will likely expand to personal devices (BYOD) and smart phones / watches / eyeglasses. Geolocation-based authentication is here to stay, with double biometrics likely: fingerprint, eye scan, typing patterns, and facial recognition. The security perimeter is now defined more by data analytics than by physical or digital boundaries, and we should dashboard this with machine learning tools as the use cases evolve.

        Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity and fog. Organizations should preconfigure artificial intelligence (AI) based cloud-scale options and spend more on cloud-trained staff. They should also select at least two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and plug-in applications; it also mitigates risk and makes vendors bid more competitively. There is huge potential for AI synergies with cloud security posture management (CSPM) tools and threat response tools, and experimentation will likely yield future dividends. Organizations should not be passive and stuck in old paradigms; the older generations should seek to learn from the younger generations without bias. Also, comprehensive logging is a must for AI tools.

        In regard to cryptocurrency, non-fungible tokens (NFTs), initial coin offerings (ICOs), and related exchanges, artificial intelligence (AI) will be used by crypto scammers and those seeking to launder money. Watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going; honest investment managers and advisors want to share that information and will back it up with details in many documents and filings [40]. Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erring far on the side of compliance. This requires us to pay more attention to knowing and monitoring our own social media baselines, and emerging AI data analytics can help here. If you use crypto mixer and / or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face high fees, zero customer service, no regulatory protection, and no decent Terms of Service or Privacy Policy, if any; and you have no guarantee that the service will even work the way you think it will.

        As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about this, because if we are, our organizations will stay weak and outdated and we will be swayed by the same artificial intelligence (AI) generated political bias that we fear confronting. More social media training is needed, as many security professionals still think of it as mostly an external marketing function.

        It is best to assume AI tools are reading all social media posts and all other available articles, including this article, which we entered into ChatGPT for feedback; it was slightly helpful in pointing out other considerations. Public-to-private partnerships (InfraGard) need to improve, and application-to-application permissions need to be more scrutinized. Not everyone needs to be a journalist, but everyone can have the common sense to identify AI / malware-inspired fake news. We must report undue AI bias in big tech from IT, compliance, media, and security perspectives. We must also resist the temptation to jump on the AI hype bandwagon, and should instead evaluate each tool and use case based on real-world business outcomes for the foreseeable future.

        About the Authors:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

        Matthew Versaggi is a senior leader in artificial intelligence with large-company healthcare experience who has seen hundreds of use cases. He is a distinguished engineer, built an organization’s “College of Artificial Intelligence”, introduced and matured both cognitive AI technology and quantum computing, has been awarded multiple patents, is an experienced public speaker, entrepreneur, strategist, and mentor, and has international business experience. He has an MBA in international business and economics and an MS in artificial intelligence from DePaul University, plus a BS in finance and MIS and a BA in computer science from Alfred University. Lastly, he has nearly a dozen professional certificates split between AI, technology, and business strategy.

        References:


        [1] Swenson, Jeremy, and NIST; Mashup 12/15/2023; “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”. 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.

        [2] Swenson, Jeremy, and Simplified AI; AI Text to graphics generator. 01/08/24: https://app.simplified.com/

        [3] Swenson, Jeremy, and ChatGPT; ChatGPT Logo Mashup. OpenAI. 12/15/23: https://chat.openai.com/auth/login

        [4] The White House; “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” 10/30/23: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

        [5] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

        [6] Sun, Zhiyuan; “Two individuals indicted for $25M AI crypto trading scam: DOJ.” Cointelegraph. 12/12/23: https://cointelegraph.com/news/two-individuals-indicted-25m-ai-artificial-intelligence-crypto-trading-scam

        [7] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

        [8] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

        [9] The White House; “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” 10/30/23: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

        [10] EU. “EU AI Act: first regulation on artificial intelligence.” 12/19/23: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

        [11] Jackson, Amber; “Top 10 companies with ethical AI practices.” AI Magazine. 07/12/23: https://aimagazine.com/ai-strategy/top-10-companies-with-ethical-ai-practices

        [12] Golbin, Ilana, and Axente, Maria Luciana; “9 ethical AI principles for organizations to follow.” World Economic Forum and PricewaterhouseCoopers (PWC). 06/23/21: https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/

        [13] Lopez, Todd C; “DOD Adopts 5 Principles of Artificial Intelligence Ethics”. DOD News. 02/25/20: https://www.defense.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/

        [14] Golbin, Ilana, and Axente, Maria Luciana; “9 ethical AI principles for organizations to follow.” World Economic Forum and PricewaterhouseCoopers (PWC). 06/23/21: https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/

        [15] The Office of the Director of National Intelligence. “Principles of Artificial Intelligence Ethics for the Intelligence Community.” 07/23/20: https://www.dni.gov/index.php/newsroom/press-releases/press-releases-2020/3468-intelligence-community-releases-artificial-intelligence-principles-and-framework#:~:text=The%20Principles%20of%20AI%20Ethics,resilient%20by%20design%2C%20and%20incorporate

        [16] NIST; “NIST AI RMF Playbook.” 01/26/23: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook

        [17] NIST; “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

        [18] NIST; “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

        [19] Crafts, Nicholas; “Artificial intelligence as a general-purpose technology: an historical perspective.” Oxford Review of Economic Policy. Volume 37, Issue 3, Autumn 2021: https://academic.oup.com/oxrep/article/37/3/521/6374675

        [20] Sun, Zhiyuan; “Two individuals indicted for $25M AI crypto trading scam: DOJ.” Cointelegraph. 12/12/23: https://cointelegraph.com/news/two-individuals-indicted-25m-ai-artificial-intelligence-crypto-trading-scam

        [21] Patel, Pranav; “ChatGPT brings forth new opportunities and challenges to the Cybersecurity industry.” LinkedIn Pulse. 04/03/23: https://www.linkedin.com/pulse/chatgpt-brings-forth-new-opportunities-challenges-industry-patel/

        [22] FTC; “Preventing the Harms of AI-enabled Voice Cloning.” 11/16/23: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning

        [23] Fiata, Gabriele; “Why Evolving AI Threats Need AI-Powered Cybersecurity.” Forbes. 10/04/23: https://www.forbes.com/sites/sap/2023/10/04/why-evolving-ai-threats-need-ai-powered-cybersecurity/?sh=161bd78b72ed

        [24] Patel, Pranav; “ChatGPT brings forth new opportunities and challenges to the Cybersecurity industry.” LinkedIn Pulse. 04/03/23: https://www.linkedin.com/pulse/chatgpt-brings-forth-new-opportunities-challenges-industry-patel/

        [25] Tobin, Donal; “What Challenges Are Hindering the Success of Your Data Lake Initiative?” Integrate.io. 10/05/22: https://www.integrate.io/blog/data-lake-initiative/

        [26] Chuvakin, Anton; “Why Your Security Data Lake Project Will … Well, Actually …” Medium. 10/22/22. https://medium.com/anton-on-security/why-your-security-data-lake-project-will-well-actually-78e0e360c292

        [27] Amazon Web Services; “What are the types of quantum technology?” 01/07/23: https://aws.amazon.com/what-is/quantum-computing/ 

        [28] ISARA Corporation; “What is Quantum-safe Cryptography?” 2023: https://www.isara.com/resources/what-is-quantum-safe.html

        [29] Swenson, Jeremy, and ChatGPT; OpenAI. 12/15/23: https://chat.openai.com/auth/login

        [30] Utimaco; “What is Lattice-based Cryptography?” 2023: https://utimaco.com/service/knowledge-base/post-quantum-cryptography/what-lattice-based-cryptography

        [31] D. Bernstein, and T. Lange; “Post-quantum cryptography – dealing with the fallout of physics success.” IACR Cryptology. 2017: https://www.semanticscholar.org/paper/Post-quantum-cryptography-dealing-with-the-fallout-Bernstein-Lange/a515aad9132a52b12a46f9a9e7ca2b02951c5b82

        [32] Swenson, Jeremy, and ChatGPT; OpenAI. 12/15/23: https://chat.openai.com/auth/login

        [33] Sibanda, Isla; “AI and Machine Learning: The Double-Edged Sword in Cybersecurity.” RSA Conference. 12/13/23: https://www.rsaconference.com/library/blog/ai-and-machine-learning-the-double-edged-sword-in-cybersecurity

        [34] Michael, Katina, Abbas, Roba, and Roussos, George; “AI in Cybersecurity: The Paradox.” IEEE Transactions on Technology and Society. Vol. 4, no. 2: pg. 104-109. 2023: https://ieeexplore.ieee.org/abstract/document/10153442

        [35] Microsoft; “What is CSPM?” 01/07/24: https://www.microsoft.com/en-us/security/business/security-101/what-is-cspm 

        [36] Rosencrance, Linda; “How to choose the best cloud security posture management tools.” CSO Online. 10/30/23: https://www.csoonline.com/article/657138/how-to-choose-the-best-cloud-security-posture-management-tools.html

        [37] Muneer, Salman Muneer, Muhammad Bux Alvi, and Amina Farrakh; “Cyber Security Event Detection Using Machine Learning Technique.” International Journal of Computational and Innovative Sciences. Vol. 2, no (2): pg. 42-46. 2023: https://ijcis.com/index.php/IJCIS/article/view/65.

        [38] Azhar, Ishaq; “Identity Management Capability Powered by Artificial Intelligence to Transform the Way User Access Privileges Are Managed, Monitored and Controlled.” International Journal of Creative Research Thoughts (IJCRT), ISSN:2320-2882, Vol. 9, Issue 1: pg. 4719-4723. January 2021: https://ssrn.com/abstract=3905119

        [39] FTC; “Preventing the Harms of AI-enabled Voice Cloning.” 11/16/23: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning

        [40] FTC; “What To Know About Cryptocurrency and Scams.” May 2022: https://consumer.ftc.gov/articles/what-know-about-cryptocurrency-and-scams

        Esports Cyber Threats and Mitigations:

        On 06/10/21, major esports software company Electronic Arts (EA) was hacked. EA is one of the biggest esports companies in the world, counting major hits including Battlefield, The Sims, Titanfall, and Star Wars: Jedi Fallen Order among the many games it develops and/or publishes, in addition to many online league sports games. An EA spokesperson said game code and related tools were stolen in the hack and that the company is still investigating the privacy implications. Early reports, however, indicated that a whopping 780GB of data was stolen (Balaji N, GBHackers On Security, 06/12/21).

        Fig 1. EA Sports Hacked Image. Balaji N, GBHackers On Security, 06/12/21.

        Given this recent hack here is an updated overview of some of the esports cyber threats and mitigations.

        Threats:

        1. Aimbots and Wallhacks

        As esports revenues and player prizes increase, more players will look for opportunities to exploit the game to gain an advantage over competitors. Many underground hacker forums reveal hundreds of aimbots and wallhacks, with prices starting as low as $5.00 and going as high as $2,000. These are essentially cheat tools for sale, and they are prohibited in official competitions (Trend Micro, 2019).

        Aimbots are a type of software used in multiplayer first-person shooter games to provide varying levels of automated targeting that gives the user an advantage over other players. Wallhacks allow the player to change the properties of in-game walls by making them transparent or nonsolid, making it easier to find or attack enemies.

        Fig 2. Wallhack Cheat For WarZone (May 6th 2020, Tom Warren).

        2. Hidden Hardware Hacks

        Some of the hardware used in competitions can be manipulated by hackers with ease. For each tournament, a gaming board sets the rules on what equipment they allow tournament participants to use. A lot of professional tournaments allow players to bring their own mouse and keyboard, which have been known to house hacks.

        Case in point: in 2018 a Dota 2 team was disqualified from a $15 million tournament after judges caught one of its members using a programmable mouse with the Synapse 3 configuration tool. The mouse allowed the player to perform movements that would be impossible without macros, preset key sequences not possible with standard nonprogrammable hardware (Trend Micro, 2019).

        3. Stolen Accounts and Credentials

        Threat actors have been increasingly targeting the esports industry, harvesting and selling user ID and password data for both internal and external esports company systems. A study by threat intelligence company KELA indicated that more than half a million login credentials tied to the employees of 25 leading game publishers have been found for sale on dark web bazaars (Amer Owaida, WeLiveSecurity, 01/05/2021).

        4. Ransomware and DDoS (Distributed Denial of Services) Attacks

        Ransomware can come via phishing, smishing, spam, or free compromised plug-ins. When installed on the gaming platform, it locks everything up and forces the host to pay ransom in difficult-to-trace digital currency like Bitcoin. Interestingly, researcher Danny Palmer of ZDNet cited Trend Micro’s research when he described the marriage of ransomware and DDoS attacks as follows:

        “Researchers also warn that attackers could blackmail esports tournament organizers, demanding a ransom payment in exchange for not launching a DDoS attack – something which organizers might consider given how events are broadcast live and the reputational damage that will occur to the host organizer if the event gets taken offline” (Danny Palmer, ZDnet, 10/29/2019).

        Mitigations:

        1. Use a VPN (Virtual Private Network)

        A VPN establishes an encrypted tunnel between you and a remote server run by the VPN provider. All your internet traffic runs through this tunnel, so your data is secure from eavesdropping. Your real IP address and location are masked, preventing ISP tracking, because your traffic exits at the VPN server. You can also more confidently use public Wi-Fi with a VPN.

        2. Use A Password Management Tool and Strong Passwords

        Another way to stay safe is by setting passwords that are longer, complex, and thus hard to guess. Additionally, they can be stored and encrypted for safekeeping using a well-regarded password vault and management tool. This tool can also help you set strong passwords and can auto-fill them with each login, if you select that option, though using just the password vaulting feature is all that is recommended. Doing these two things makes it difficult for hackers to steal passwords or access your gaming accounts.
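
        For illustration, here is a small sketch of the password-generation step using Python's standard-library secrets module, the kind of job a password manager automates for you:

```python
# Generating a strong random password with the standard-library
# `secrets` module, which draws from a cryptographically secure
# random source (unlike the `random` module).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # unique on every run; store it in your vault
```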

        3. Use Only Whitelisted Gaming Sites Not Blacklisted Ones or Ones Found Via the Dark Web

        Use only approved, whitelisted gaming platforms and sites that do not expose you to data leakage or intrusion on your privacy. Whitelisting is the practice of explicitly allowing identified websites a particular privilege, service, or level of access; blacklisting is blocking certain sites or privileges. If a site does not assure your privacy, do not even sign up, let alone participate.
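
        A minimal sketch of the allowlist idea follows, assuming a hypothetical set of approved gaming domains; anything not explicitly approved is refused by default:

```python
# A minimal allowlist (whitelist) check. The approved domains here are
# hypothetical examples; the point is that unknown sites are denied.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"store.steampowered.com", "www.ea.com", "www.epicgames.com"}

def is_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_DOMAINS   # deny by default

for url in ["https://www.ea.com/games", "http://free-skins.example.net/login"]:
    print(url, "->", "allowed" if is_allowed(url) else "blocked")
```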

        Abstract Forward Consulting Now Open For Business!


        In 2016, Mr. Swenson decided to go back to graduate school to pursue a second master’s degree in Security Technologies at the University of Minnesota’s renowned Technological Leadership Institute, positioning himself to launch a technology leadership consulting firm. He completed the degree in 2017, and it positions him as a creative and security-savvy senior consultant to CIOs, CTOs, CEOs, and other business line leaders. His capstone was on pre-cursor detection of data exfiltration and included input from many of the region’s CIOs, CISOs, CEOs, and state government leaders. His capstone advisor was technology and security pioneer Brian Isle of Adventium Labs.

        Over 14 years, Mr. Swenson has had the honor and privilege of consulting at 10 organizations in 7 industries on progressively complex and difficult IT problems, including security, project management, business analysis, data archival and governance, audit, web application launch and decommission, strategy, information security, data loss prevention, communication, and even board of directors governance. From governments, banks, insurance companies, minority-owned small businesses, marketing companies, technology companies, and healthcare companies, he has a wealth of abstract experience backed up by the knowledge from his 4 degrees and validated by his 40,000 followers across LinkedIn, Twitter, and his blog. Impressively, the results are double-digit risk reductions, huge vetted process improvements, and $25+ million on average in savings per project!

        As demand for his contract consulting work has increased, he has continued to write and speak on how to achieve such results. Often he has been called upon to explain his process and style to organizations and people. While most accept it and get on board fast, some are not ready, mostly because they are stuck in the past and are afraid to admit their own errors due to confirmation bias. Two great technology leaders, Steve Jobs (Apple) and Carly Fiorina (HP), often described how doing things differently would have its detractors. Yet that is exactly why there is a need for Abstract Forward Consulting.

        With the wind at our backs, we will press on because the world requires better results and we have higher standards (if you want to know more reach out below). With a heart to serve many organizations and people, we have synergized a hybrid blend of this process and experience to form a new consulting firm, one that puts abstract thinking first to reduce risk, improve security, and enhance business technology.

        Proudly announcing: Abstract Forward Consulting, LLC.

        Company Mission Statement: We use abstract thinking on security, risk, and technology problems to move business forward!

        Company Vision: To be the premier provider of technology and security consulting services while making the world a better and safer place.

        Main service offerings for I.T. and business leaders:

        1) Management Consulting

        2) Cyber Security Consulting

        3) Risk Management Consulting

        4) Data Governance Consulting

        5) Enterprise Collaboration Tools Consulting

        6) Process Improvement Consulting

        If you want to have a free exploratory conversation on how we can help your organization please contact us here or inbox me. As our business grows, we will announce more people and tactics to build a tidal wave to make your organization the best it can be!

        Thanks to the community for your support!

        Founder and CEO: Abstract Forward Consulting, LLC.

        Jeremy Swenson, MBA MSST (Master of Science In Security Technologies)