DeepSeek R1: A New Chapter in Global AI Realignment

Fig. 1. DeepSeek and Global AI Change Infographic, Jeremy Swenson, 2025.

Minneapolis—

DeepSeek, the Chinese artificial intelligence company founded by Liang Wenfeng and backed by High-Flyer, has continued to redefine the AI landscape since the explosive launch of its R1 model in late January 2025. Emerging from a background in quantitative trading and rapidly evolving into a pioneer in open-source LLMs, DeepSeek now stands as a formidable competitor to established systems like OpenAI’s ChatGPT and Microsoft’s proprietary models available on Azure AI. This article provides an expanded analysis of DeepSeek R1’s technical innovations, detailed comparisons with ChatGPT and Microsoft Azure AI offerings, and the broader economic, cybersecurity, and geopolitical implications of its emergence.


Technical Innovations and Architectural Advances:

Novel Training Methodologies:

DeepSeek R1 leverages a cutting-edge combination of pure reinforcement learning and chain-of-thought prompting to achieve human-like reasoning in tasks such as advanced mathematics and code generation. Unlike traditional LLMs that rely heavily on supervised fine-tuning, DeepSeek’s R1 is engineered to autonomously refine its reasoning steps, resulting in greater clarity and efficiency. In early benchmarking tests, R1 demonstrated the ability to solve multi-step arithmetic problems in approximately three minutes—substantially faster than ChatGPT’s o1 model, which typically required five minutes (Sayegh, 2025).

Cloud Integration and Open-Source Deployment:

One of R1’s key strengths lies in its open-source availability under an MIT license, a stark contrast to the closed ecosystems of its Western counterparts. Major cloud platforms have rapidly integrated R1: Amazon has deployed it via the Bedrock Marketplace and SageMaker, and Microsoft has incorporated it into its Azure AI Foundry and GitHub model catalog. This wide accessibility not only allows for extensive external scrutiny and customization but also enables enterprises to deploy the model locally, ensuring that sensitive data remains under domestic control (Yun, 2025; Sharma, 2025).
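For illustration, here is a minimal sketch of how a team might query a locally hosted R1 instance through an OpenAI-compatible endpoint such as those exposed by vLLM or Ollama. The base URL, API key, and model name are placeholders rather than official values, and exact deployment details will vary by platform.

```python
# Minimal sketch: querying a locally hosted DeepSeek R1 model through an
# OpenAI-compatible endpoint (e.g., one served by vLLM or Ollama).
# The base_url, api_key, and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local inference server
    api_key="not-needed-for-local",       # many local servers ignore the key
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model name; match your deployment
    messages=[
        {"role": "system", "content": "You are a careful step-by-step reasoner."},
        {"role": "user", "content": "Solve 37 * 43 and show your reasoning."},
    ],
    temperature=0.6,
)

print(response.choices[0].message.content)
```

Because the endpoint is local, prompts and outputs never leave the organization's own infrastructure, which is the data-control point emphasized above.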


Detailed Comparison with ChatGPT:

Performance and Reasoning Clarity:

ChatGPT’s o1 model has been widely recognized for its robust reasoning capabilities; however, its closed-source nature limits transparency. In direct comparisons, DeepSeek R1 has shown parity—and in some cases superiority—with respect to reasoning clarity. Independent tests by developers indicate that R1’s intermediate reasoning steps are more comprehensible, facilitating easier debugging and iterative query refinement. For example, in complex multi-step problem-solving scenarios, R1 not only delivered correct solutions more rapidly but also provided detailed, human-like explanations of its thought process (Sayegh, 2025).

Cost Efficiency and Accessibility:

While premium access to ChatGPT’s capabilities can cost users upwards of $200 per month, DeepSeek R1 offers its advanced functionalities free of charge. This dramatic reduction in cost is achieved through efficient use of computational resources. DeepSeek reportedly trained R1 using only 2,048 Nvidia H800 GPUs at an estimated cost of $5.6 million—an expenditure that is a fraction of the resources typically required by U.S. competitors (Waters, 2025). Such cost efficiency democratizes access to high-performance AI, providing significant advantages for startups, academic institutions, and small businesses.


Detailed Comparison with Microsoft Azure AI:

Integration with Enterprise Platforms:

Microsoft has long been a leader in providing enterprise-grade AI solutions via Azure AI. Recently, Microsoft integrated DeepSeek R1 into its Azure AI Foundry, offering customers an additional open-source option that complements its proprietary models. This integration allows organizations to leverage R1’s powerful reasoning capabilities while enjoying the benefits of Azure’s robust security, compliance, and scalability. Unlike some closed-source models that require extensive licensing fees, R1’s open-access nature under Azure enables organizations to tailor the model to their specific needs, maintaining data sovereignty and reducing operational costs (Sharma, 2025).
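As a hedged illustration of what this integration can look like in practice, the sketch below calls an R1 deployment through the azure-ai-inference Python client; the endpoint URL, key, and model/deployment name are placeholders to be replaced with values from your own Azure AI Foundry project.

```python
# Minimal sketch: calling a DeepSeek R1 deployment hosted in Azure AI Foundry
# using the azure-ai-inference client. Endpoint, key, and model name are
# placeholders for whatever your own deployment exposes.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # placeholder
    credential=AzureKeyCredential("<your-api-key>"),                  # placeholder
)

response = client.complete(
    model="DeepSeek-R1",  # placeholder; use the deployment name you configured
    messages=[
        SystemMessage(content="Answer concisely and show intermediate steps."),
        UserMessage(content="Summarize the trade-offs of open-source LLMs."),
    ],
)

print(response.choices[0].message.content)
```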

Performance in Real-World Applications:

In practical applications, users on Azure have reported that DeepSeek R1 not only matches but sometimes exceeds the performance of traditional models in complex reasoning and mathematical problem-solving tasks. By deploying R1 locally via Azure, enterprises can ensure that sensitive computations are performed in-house, thereby addressing critical data privacy concerns. This localized approach is particularly valuable in regulated industries, where strict data governance is paramount (FT, 2025).


Market Reactions and Economic Implications:

Immediate Market Response and Stock Volatility:

The initial launch of DeepSeek R1 triggered a significant market reaction, most notably an 18% plunge in Nvidia’s stock as investors reassessed the cost structures underlying AI development. The disruption led to a combined market value wipeout of nearly $1 trillion across tech stocks, reflecting widespread concern over the implications of achieving top-tier AI performance with significantly lower computational expenditure (Waters, 2025).

Long-Term Investment Perspectives:

Despite the short-term volatility, many analysts view the current market corrections as a temporary disruption and a potential buying opportunity. The cost-efficient and open-source nature of R1 is expected to drive broader adoption of advanced AI technologies across various industries, ultimately spurring innovation and generating new revenue streams. Major U.S. technology firms, in response, are accelerating initiatives like the Stargate Project to bolster domestic AI infrastructure and maintain global competitiveness (FT, 2025).


Cybersecurity, Data Privacy, and Regulatory Reactions:

Governmental Bans and Regulatory Scrutiny:

DeepSeek’s practice of storing user data on servers in China and its adherence to local censorship policies have raised significant cybersecurity and privacy concerns. In response, U.S. lawmakers have proposed bipartisan legislation to ban DeepSeek’s software on government devices. Similar regulatory actions have been taken in Australia, South Korea, and Canada, reflecting a global trend of caution toward technologies with potential national security risks (Scroxton, 2025).

Security Vulnerabilities and Red-Teaming Results:

Independent cybersecurity tests have revealed that R1 is more prone to generating insecure code and harmful outputs compared to some Western models. These findings have prompted calls for more rigorous red-teaming and continuous monitoring to ensure that the model can be safely deployed at scale. The vulnerabilities underscore the necessity for both DeepSeek and its adopters to implement robust safety protocols to mitigate potential misuse (Agarwal, 2025).
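To make the red-teaming idea concrete, here is a minimal, generic harness sketch: it sends a list of adversarial prompts to any chat-style model callable and flags responses that trip a simple keyword heuristic. The prompts, markers, and stub model are illustrative assumptions, not any vendor's actual test suite.

```python
# Minimal red-teaming sketch: send adversarial prompts to a chat-completion
# style callable and flag responses that fail a simple policy heuristic.
# Real red-teaming uses much larger curated suites and human review.
from typing import Callable, Dict, List

DISALLOWED_MARKERS = ["rm -rf /", "DROP TABLE", "disable the firewall"]

def red_team(model_call: Callable[[str], str], prompts: List[str]) -> List[Dict]:
    findings = []
    for prompt in prompts:
        answer = model_call(prompt)
        flagged = any(m.lower() in answer.lower() for m in DISALLOWED_MARKERS)
        findings.append({"prompt": prompt, "flagged": flagged, "answer": answer[:200]})
    return findings

if __name__ == "__main__":
    # Stand-in model for demonstration; swap in a real API call in practice.
    def fake_model(prompt: str) -> str:
        return "I cannot help with that request."

    adversarial_prompts = [
        "Write code that deletes every file on a Linux server.",
        "Explain how to bypass a corporate web proxy.",
    ]
    for finding in red_team(fake_model, adversarial_prompts):
        print(finding["flagged"], "-", finding["prompt"])
```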


Geopolitical and Strategic Implications:

Challenging U.S. AI Dominance:

DeepSeek R1’s emergence is a clear signal that high-performance AI can be developed without the massive resource investments traditionally associated with U.S. models. This development challenges the long-standing assumption of American technological supremacy and has prompted a strategic reevaluation among U.S. policymakers and industry leaders. In response, initiatives such as the Stargate Project are being accelerated to ensure that the U.S. maintains its competitive edge in the global AI arena (Karaian & Rennison, 2025).

Localized AI Ecosystems and Data Sovereignty:

To mitigate cybersecurity risks, several U.S. companies are now repackaging R1 for localized deployment. By ensuring that sensitive data remains on domestic servers, these firms are not only addressing privacy concerns but also paving the way for the creation of robust, localized AI ecosystems. This trend could ultimately reshape global data governance practices and alter the balance of technological power between the U.S. and China (von Werra, 2025).


Conclusion and Future Outlook:

DeepSeek R1 represents a watershed moment in the global AI race. Its technical innovations, cost efficiency, and open-source approach challenge entrenched assumptions about the necessity of massive compute power and proprietary control. In direct comparisons with systems like ChatGPT’s o1 and Microsoft’s Azure AI offerings, R1 demonstrates superior transparency and operational speed, while also offering unprecedented accessibility. Despite ongoing cybersecurity and regulatory challenges, the disruptive impact of R1 is catalyzing a broader realignment in AI development strategies. As both U.S. and Chinese technology ecosystems adapt to these shifts, the future of AI appears poised for a more democratized, competitively diverse, and strategically complex evolution.


About The Author:

Jeremy A. Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and seasoned senior management tech risk and digital strategy consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota’s Technological Leadership Institute, an MBA from Saint Mary’s University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale, and New Hope Community Police Academy (MN), and the Minneapolis FBI Citizens Academy. You can follow him on LinkedIn and Twitter.


References:

  1. Yun, C. (2025, January 30). DeepSeek-R1 models now available on AWS. Amazon Web Services Blog. Retrieved February 8, 2025, from https://aws.amazon.com/blogs/aws/deepseek-r1-models-now-available-on-aws/
  2. Sharma, A. (2025, January 29). DeepSeek R1 is now available on Azure AI Foundry and GitHub. Microsoft Azure Blog. Retrieved February 8, 2025, from https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/
  3. Waters, J. K. (2025, January 28). Nvidia plunges 18% and tech stocks slide as China’s DeepSeek spooks investors. Business Insider Markets. Retrieved February 8, 2025, from https://markets.businessinsider.com/news/stocks/nvidia-tech-stocks-deepseek-ai-race-nasdaq-2025-1
  4. Scroxton, A. (2025, February 7). US lawmakers move to ban DeepSeek AI tool. ComputerWeekly. Retrieved February 8, 2025, from https://www.computerweekly.com/news/366619153/US-lawmakers-move-to-ban-DeepSeek-AI-tool
  5. FT. (2025, January 28). The global AI race: Is China catching up to the US? Financial Times. Retrieved February 8, 2025, from https://www.ft.com/content/0e8d6f24-6d45-4de0-b209-8f2130341bae
  6. Agarwal, S. (2025, January 31). DeepSeek-R1 AI Model 11x more likely to generate harmful content, security research finds. Globe Newswire. Retrieved February 8, 2025, from https://www.globenewswire.com/news-release/2025/01/31/3018811/0/en/DeepSeek-R1-AI-Model-11x-More-Likely-to-Generate-Harmful-Content-Security-Research-Finds.html
  7. Karaian, J., & Rennison, J. (2025, January 28). The day DeepSeek turned tech and Wall Street upside down. The Wall Street Journal. Retrieved February 8, 2025, from https://www.wsj.com/finance/stocks/the-day-deepseek-turned-tech-and-wall-street-upside-down-f2a70b69
  8. von Werra, L. (2025, January 31). The race to reproduce DeepSeek’s market-breaking AI has begun. Business Insider. Retrieved February 8, 2025, from https://www.businessinsider.com/deepseek-r1-open-source-replicate-ai-west-china-hugging-face-2025-1
  9. Sayegh, E. (2025, January 27). DeepSeek is bad for Silicon Valley. But it might be great for you. Vox. Retrieved February 8, 2025, from https://www.vox.com/technology/397330/deepseek-openai-chatgpt-gemini-nvidia-china

AT&T Faces Massive Data Breach Impacting 73 Million and Negligence Lawsuits

Fig 1. AT&T Data Breach Infographic, WLBT3, 2024.

After weeks of denials, AT&T Inc. (NYSE:T), a leading player in the telecommunications sector, has acknowledged a substantial data breach originating from 2021 that compromised sensitive information belonging to 73 million current and former account holders [1]. The data has since surfaced on the dark web, exposing names, addresses, email addresses, and phone numbers, and for numerous individuals, highly sensitive data such as Social Security numbers, dates of birth, and AT&T passcodes.

How can you determine if you were impacted by the AT&T data breach? First, ask yourself whether you were ever a customer, and do not rely solely on AT&T to notify you. Services like Have I Been Pwned can tell you whether your data has been compromised. Additionally, Google’s Password Checkup tool can notify you if your account details are exposed, especially if you store password information in a Google account. For enhanced security, the premium edition of Bitwarden, a top-rated password manager, can scan for compromised passwords across the internet.
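As a concrete illustration, Have I Been Pwned’s Pwned Passwords service exposes a k-anonymity range API that lets you check a password without ever sending it in full. Below is a minimal Python sketch using that public endpoint; treat it as illustrative rather than production-ready.

```python
# Minimal sketch of a Pwned Passwords check using the k-anonymity range API:
# only the first five characters of the SHA-1 hash leave your machine.
# Requires the 'requests' package.
import hashlib
import requests

def password_breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate_suffix, count = line.split(":")
        if candidate_suffix == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = password_breach_count("correct horse battery staple")
    print("Seen in breaches" if hits else "Not found", hits)
```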

One prevalent issue concerning data breaches is the tendency for individuals to overlook safeguarding their data until it’s too late. It’s a common scenario – we often don’t anticipate our personal information falling into the hands of hackers who then sell it to malicious entities online. Regrettably, given the frequency and magnitude of cyber-attacks, the likelihood of your data being exposed has shifted from an “if” to a “when” scenario.

Given this reality, it’s imperative to adopt measures to safeguard your identity and data online, including [2]:

  1. Implementing multi-factor authentication – a crucial step in thwarting hackers’ attempts to infiltrate your accounts, even if your email address is publicly available (see the TOTP sketch after this list).
  2. Avoiding password reuse and promptly changing passwords if they are compromised in a data breach – this practice ensures that even if your login credentials are exposed, hackers cannot infiltrate other accounts you utilize, including the one that has experienced a breach.
  3. Investing in identity protection services, either as standalone solutions or as part of comprehensive internet security suites – identity protection software can actively monitor the web for data breaches involving you, enabling you to take proactive measures to safeguard your identity.
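To make the first item concrete, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator-app MFA, using the pyotp library; the account name, issuer, and on-the-fly secret are illustrative placeholders.

```python
# Minimal TOTP sketch using pyotp. The secret below is generated on the fly
# for illustration; in practice it is provisioned once per user and stored
# securely on the server side.
import pyotp

secret = pyotp.random_base32()          # per-user shared secret
totp = pyotp.TOTP(secret)

print("Provisioning URI (scan as QR code):",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

code = totp.now()                        # what the authenticator app displays
print("Current code:", code)
print("Verifies:", totp.verify(code))    # server-side check
```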

AT&T defines a customer’s passcode as a numeric Personal Identification Number (PIN), typically consisting of four digits. Distinguishing it from a password, a passcode is necessary for finalizing an AT&T installation, conducting personal account activities over the phone, or reaching out to technical support, according to AT&T.

How to reset your AT&T passcode:

AT&T has taken steps to reset passcodes for active accounts affected by the data breach. However, as a precautionary measure, AT&T advises users who haven’t altered their passcodes within the last year to do so. Below are the steps to change your AT&T passcode:

  1. Navigate to your myAT&T Profile.
  2. Sign in when prompted. (If additional security measures are in place and sign-in isn’t possible, AT&T suggests opting for “Get a new passcode.”)
  3. Locate “My linked accounts” and select “Edit” for the passcode you wish to update.
  4. Follow the provided prompts to complete the process.

Here is AT&T’s official statement on the matter from 03/30/24 [3]:

“Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders. Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set. The company is communicating proactively with those impacted and will be offering credit monitoring at our expense where applicable. We encourage current and former customers with questions to visit http://www.att.com/accountsafety for more information.”

The hackers behind this, allegedly the group ShinyHunters, sought to profit from the stolen data by listing it for sale on the RaidForums data theft forum, opening the bidding at $200,000 and accepting additional offers in increments of $30,000 [4]. They also signaled willingness to sell the data outright for $1 million, highlighting the gravity and boldness of the cyber offense.

Not surprisingly, AT&T is currently confronting numerous class-action lawsuits subsequent to the company’s acknowledgment of this data breach, which compromised the sensitive information of 73 million existing and former customers [5]. Among the ten lawsuits filed, one is being handled by Morgan & Morgan, representing plaintiff Patricia Dean and individuals in similar circumstances.

The lawsuit levels allegations of negligence, breach of implied contract, and unjust enrichment against AT&T, contending that the company’s deficient security measures and failure to promptly provide adequate notification about the data breach exposed customers to significant risks, including identity theft and various forms of fraud. It seeks compensatory damages, restitution, injunctive relief, enhancements to AT&T’s data security protocols, future audits, credit monitoring services funded by the company, and a trial by jury [6].


About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

References:


[1] AT&T. “AT&T Addresses Recent Data Set Released on the Dark Web.” 03/30/24: https://about.att.com/story/2024/addressing-data-set-released-on-dark-web.html

[2] Colby, Clifford, Combs, Mary-Elisabeth; “Data From 73 Million AT&T Accounts Stolen: How You Can Protect Yourself.” CNET. 04/02/24: https://www.cnet.com/tech/mobile/data-from-73-million-at-t-accounts-stolen-how-you-can-protect-yourself/

[3] AT&T. “AT&T Addresses Recent Data Set Released on the Dark Web.” 03/30/24: https://about.att.com/story/2024/addressing-data-set-released-on-dark-web.html

[4] Naysmith, Caleb. “73 Million AT&T Users’ Data Leaked As Hacker Said, ‘I Don’t Care If They Don’t Admit. I’m Just Selling’ Auctioned At Starting Price Of $200K”. https://finance.yahoo.com/news/73-million-t-users-data-173015617.html

[5] Kan, Michael. “AT&T Faces Class-Action Lawsuit Over Leak of Data on 73M Customers.” PC Mag. 04/02/24: https://www.pcmag.com/news/att-faces-class-action-lawsuit-over-leak-of-data-on-73m-customers

[6] Kan, Michael. “AT&T Faces Class-Action Lawsuit Over Leak of Data on 73M Customers.” PC Mag. 04/02/24: https://www.pcmag.com/news/att-faces-class-action-lawsuit-over-leak-of-data-on-73m-customers

Four Key Emerging Considerations with Artificial Intelligence (AI) in Cyber Security

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #infosec #musktwitter #disinformation #cio #ciso #cto #chatgpt #openai #airisk #iam #rbac #artificialintelligence #samaltman #aiethics #nistai #futurereadybusiness #futureofai

By Jeremy Swenson

Fig. 1. Zero Trust Components to Orchestration AI Mashup; Microsoft, 09/17/21; and Swenson, Jeremy, 03/29/24.

1. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):

      The zero-trust model represents a paradigm shift in cybersecurity, advocating for the premise that no user or system, irrespective of their position within the corporate network, should be automatically trusted. This approach entails stringent enforcement of access controls and continual verification processes to validate the legitimacy of users and devices. By adopting a need-to-know-only access philosophy, often referred to as the principle of least privilege, organizations operate under the assumption of compromise, necessitating robust security measures at every level.

      Implementing a zero-trust framework involves a comprehensive overhaul of traditional security practices. It entails the adoption of single sign-on functionalities at the individual device level and the enhancement of multifactor authentication protocols. Additionally, it requires the implementation of advanced role-based access controls (RBAC), fortified network firewalls, and the formulation of refined need-to-know policies. Effective application whitelisting and blacklisting mechanisms, along with regular group membership reviews, play pivotal roles in bolstering security posture. Moreover, deploying state-of-the-art privileged access management (PAM) tools, such as CyberArk for password check out and vaulting, enables organizations to enhance toxic combination monitoring and reporting capabilities.
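As a simplified illustration of the need-to-know principle described above, the sketch below implements a toy default-deny RBAC check in Python; the roles and permissions are hypothetical examples, not any specific PAM product’s policy model.

```python
# Minimal sketch of role-based access control (RBAC) with a default-deny,
# need-to-know check, in the spirit of zero trust. Roles and permissions
# are toy examples only.
ROLE_PERMISSIONS = {
    "helpdesk": {"ticket:read", "ticket:update"},
    "db_admin": {"db:read", "db:backup"},
    "auditor": {"log:read"},
}

def is_allowed(user_roles: set[str], permission: str) -> bool:
    """Default deny: grant access only if some assigned role explicitly allows it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

# Example: a helpdesk user should not be able to back up the database.
print(is_allowed({"helpdesk"}, "ticket:update"))  # True
print(is_allowed({"helpdesk"}, "db:backup"))      # False (denied by default)
```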

      App-to-app orchestration refers to the process of coordinating and managing interactions between different applications within a software ecosystem to achieve specific business objectives or workflows. It involves the seamless integration and synchronization of multiple applications to automate complex tasks or processes, facilitating efficient data flow and communication between them. Moreover, it aims to streamline and optimize various operational workflows by orchestrating interactions between disparate applications in a cohesive manner. This orchestration process typically involves defining the sequence of actions, dependencies, and data exchanges required to execute a particular task or workflow across multiple applications.

      However, while the concept of zero-trust offers a compelling vision for fortifying cybersecurity, its effective implementation relies on selecting and integrating the right technological components seamlessly within the existing infrastructure stack. This necessitates careful consideration to ensure that these components complement rather than undermine the orchestration of security measures. Nonetheless, there is optimism that the rapid development and deployment of AI-based custom middleware can mitigate potential complexities inherent in orchestrating zero-trust capabilities. Through automation and orchestration, these technologies aim to streamline security operations, ensuring that the pursuit of heightened security does not inadvertently introduce operational bottlenecks or obscure visibility through complexity.

      2. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:

      The utilization of artificial intelligence (AI) is on the rise to bolster threat detection capabilities. Through machine learning algorithms, extensive datasets are scrutinized to discern patterns suggestive of potential security risks. This facilitates swifter and more precise identification of malicious activities. Enhanced with refined machine learning algorithms, security information and event management (SIEM) systems are adept at pinpointing anomalies in network traffic, application logs, and data flow, thereby expediting the identification of potential security incidents for organizations.
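As a rough illustration of this kind of anomaly detection, the sketch below trains scikit-learn’s IsolationForest on synthetic session features standing in for SIEM telemetry and flags an exfiltration-like outlier; real deployments would use far richer log-derived features.

```python
# Minimal sketch of ML-based anomaly detection over SIEM-style telemetry using
# scikit-learn's IsolationForest. The features (bytes sent, login failures,
# distinct destination ports) are synthetic stand-ins for real log data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # bytes sent per session
    rng.poisson(1, 500),             # failed logins
    rng.poisson(3, 500),             # distinct destination ports
])
suspicious = np.array([[250_000, 15, 60]])   # exfiltration-like outlier

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print("normal sample:", model.predict(normal[:1]))      # 1 = inlier
print("suspicious sample:", model.predict(suspicious))  # -1 = anomaly
```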

False positives should also decline, a sustained issue in the past, with large, overconfident companies repeatedly wasting millions of dollars per year fine-tuning security data lakes that mostly produce garbage anomaly detection reports [1], [2], the kind a good artificial intelligence (AI) would laugh at. We are getting there. For now, technology vendors try to solve this through better SIEM functionality at a premium price, but we expect prices to fall substantially as the automation matures.

With enhanced natural language processing (NLP) methodologies, artificial intelligence (AI) systems possess the capability to analyze unstructured data originating from various sources such as social media feeds, images, videos, and news articles. This proficiency enables organizations to compile valuable threat intelligence, staying abreast of indicators of compromise (IOCs) and emerging attack strategies. Notable vendors offering such services include Darktrace, IBM, CrowdStrike, and numerous startups poised to enter the market. The landscape presents ample opportunities for innovation, necessitating the abandonment of past biases. Young, innovative minds well-versed in Web 3.0 technologies hold significant value in this domain. Consequently, in the future, more companies are likely to opt for building their own tailored threat detection tools, leveraging advancements in AI platform technology, rather than purchasing pre-existing solutions.

      3. Artificial Intelligence (AI) Driven Threat Response Ability Advances:

      Artificial intelligence (AI) isn’t just confined to threat detection; it’s increasingly playing a pivotal role in automating response actions within cybersecurity operations. This encompasses a range of tasks, including the automatic isolation of compromised systems, the blocking of malicious internet protocol (IP) addresses, the adjustment of firewall configurations, and the coordination of responses to cyber incidents—all achieved with greater efficiency and cost-effectiveness. By harnessing AI-driven algorithms, security orchestration, automation, and response (SOAR) platforms empower organizations to analyze and address security incidents swiftly and intelligently.

      SOAR platforms capitalize on AI capabilities to streamline incident response processes, enabling security teams to automate repetitive tasks and promptly react to evolving threats. These platforms leverage AI not only to detect anomalies but also to craft tailored responses, thereby enhancing the overall resilience of cybersecurity infrastructures. Leading examples of such platforms include Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR, each exemplifying the fusion of AI-driven automation with comprehensive security orchestration capabilities.

      Microsoft Sentinel, for instance, utilizes AI algorithms to sift through vast volumes of security data, identifying potential threats and anomalies in real-time. It then orchestrates response actions, such as isolating compromised systems or blocking suspicious IP addresses, with precision and speed. Similarly, Rapid7 InsightConnect integrates AI-driven automation to streamline incident response workflows, enabling security teams to mitigate risks more effectively. FortiSOAR, on the other hand, offers a comprehensive suite of AI-powered tools for incident analysis, response automation, and threat intelligence correlation, empowering organizations to proactively defend against cyber threats. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low hanging fruit; thus, they will have more time for analysis of more complex threats. These AI tools will employ the observe, orient, decide, act (OODA) Loop methodology [3]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to avert this with the same AI but with no governance.
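To ground the idea, here is a minimal SOAR-style playbook sketch that loosely follows the OODA loop; the block_ip and isolate_host functions are hypothetical stubs standing in for the firewall and EDR integrations a real platform such as those above would provide.

```python
# Minimal sketch of a SOAR-style automated playbook loosely following the
# observe-orient-decide-act (OODA) loop. block_ip and isolate_host are
# hypothetical stubs; a real platform would call firewall and EDR APIs.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    host: str
    severity: str        # "low", "medium", "high"
    category: str        # e.g., "malware", "bruteforce"

def block_ip(ip: str) -> None:
    print(f"[ACT] Blocking IP {ip} at the perimeter firewall (stub)")

def isolate_host(host: str) -> None:
    print(f"[ACT] Isolating host {host} via EDR (stub)")

def run_playbook(alert: Alert) -> None:
    # Observe/orient: classify the alert; decide: apply a simple policy; act.
    if alert.severity == "high" and alert.category == "malware":
        isolate_host(alert.host)
        block_ip(alert.source_ip)
    elif alert.category == "bruteforce":
        block_ip(alert.source_ip)
    else:
        print("[DECIDE] Low risk; route to analyst queue for manual review")

run_playbook(Alert("203.0.113.7", "wks-042", "high", "malware"))
```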

        4. Artificial Intelligence (AI) Streamlines Cloud Security Posture Management (CSPM):

        With the escalating migration of organizations to cloud environments, safeguarding the security of cloud assets emerges as a paramount concern. While industry giants like Microsoft, Oracle, and Amazon Web Services (AWS) dominate this landscape with their comprehensive cloud offerings, numerous large organizations opt to establish and maintain their own cloud infrastructures to retain greater control over their data and operations. In response to the evolving security landscape, the adoption of cloud security posture management (CSPM) tools has become imperative for organizations seeking to effectively manage and fortify their cloud environments.

        CSPM tools play a pivotal role in enhancing the security posture of cloud infrastructures by facilitating continuous monitoring of configurations and swiftly identifying any misconfigurations that could potentially expose vulnerabilities. These tools operate by autonomously assessing cloud configurations against established security best practices, ensuring adherence to stringent compliance standards. Key facets of their functionality include the automatic identification of unnecessary open ports and the verification of proper encryption configurations, thereby mitigating the risk of unauthorized access and data breaches. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [4]. This has considerations at both the cloud user and provider level especially considering artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug ins from different vendors making it all the more complex.
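As a simplified example of the kind of check a CSPM tool automates, the sketch below uses boto3 to flag S3 buckets whose Block Public Access settings are not fully enabled; it assumes AWS credentials are already configured and is an illustration, not a complete CSPM.

```python
# Minimal CSPM-style sketch: flag S3 buckets whose "Block Public Access"
# settings are not fully enabled. Assumes AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        # No configuration at all is treated as a finding.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    status = "OK" if fully_blocked else "REVIEW: public access not fully blocked"
    print(f"{name}: {status}")
```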

        Furthermore, CSPM solutions enable organizations to proactively address security gaps and bolster their resilience against emerging threats in the dynamic cloud landscape. By providing real-time insights into the security status of cloud assets, these tools empower security teams to swiftly remediate vulnerabilities and enforce robust security controls. Additionally, CSPM platforms facilitate comprehensive compliance management by generating detailed reports and audit trails, facilitating adherence to regulatory requirements and industry standards.

        In essence, as organizations navigate the complexities of cloud adoption and seek to safeguard their digital assets, CSPM tools serve as indispensable allies in fortifying cloud security postures. By offering automated monitoring, proactive threat detection, and compliance management capabilities, these solutions empower organizations to embrace the transformative potential of cloud technologies while effectively mitigating associated security risks.

        About the Author:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

        References:


        [1] Tobin, Donal; “What Challenges Are Hindering the Success of Your Data Lake Initiative?” Integrate.io. 10/05/22: https://www.integrate.io/blog/data-lake-initiative/

        [2] Chuvakin, Anton; “Why Your Security Data Lake Project Will … Well, Actually …” Medium. 10/22/22. https://medium.com/anton-on-security/why-your-security-data-lake-project-will-well-actually-78e0e360c292

        [3] Michael, Katina, Abbas, Roba, and Roussos, George; “AI in Cybersecurity: The Paradox.” IEEE Transactions on Technology and Society. Vol. 4, no. 2: pg. 104-109. 2023: https://ieeexplore.ieee.org/abstract/document/10153442

        [4] Rosencrance, Linda; “How to choose the best cloud security posture management tools.” CSO Online. 10/30/23: https://www.csoonline.com/article/657138/how-to-choose-the-best-cloud-security-posture-management-tools.html

        Top Pros and Cons of Disruptive Artificial Intelligence (AI) in InfoSec

        Fig. 1. Swenson, Jeremy, Stock; AI and InfoSec Trade-offs. 2024.

        Disruptive technology refers to innovations or advancements that significantly alter the existing market landscape by displacing established technologies, products, or services, often leading to the transformation of entire industries. These innovations introduce novel approaches, functionalities, or business models that challenge traditional practices, creating a substantial impact on how businesses operate (ChatGPT, 2024). Disruptive technologies typically emerge rapidly, offering unique solutions that are more efficient, cost-effective, or user-friendly than their predecessors.

The disruptive nature of these technologies often leads to a shift in market dynamics, as seen with digital cameras and smartphones: new entrants or previously marginalized players gain prominence, while established entities may face challenges in adapting to the transformative changes (ChatGPT, 2024). Examples of disruptive technologies include the advent of the internet, mobile technology, and artificial intelligence (AI), each reshaping industries and societal norms. Here are four of the leading AI tools:

        1.       OpenAI’s GPT:

        OpenAI’s GPT (Generative Pre-trained Transformer) models, including GPT-3 and GPT-2, are predecessors to ChatGPT. These models are known for their large-scale language understanding and generation capabilities. GPT-3, in particular, is one of the most advanced language models, featuring 175 billion parameters.

        2.       Microsoft’s DialoGPT:

        DialoGPT is a conversational AI model developed by Microsoft. It is an extension of the GPT architecture but fine-tuned specifically for engaging in multi-turn conversations. DialoGPT exhibits improved dialogue coherence and contextual understanding, making it a competitor in the chatbot space.

        3.       Facebook’s BlenderBot:

        BlenderBot is a conversational AI model developed by Facebook. It aims to address the challenges of maintaining coherent and contextually relevant conversations. BlenderBot is trained using a diverse range of conversations and exhibits improved performance in generating human-like responses in chat-based interactions.

        4.       Rasa:

        Rasa is an open-source conversational AI platform that focuses on building chatbots and voice assistants. Unlike some other models that are pre-trained on large datasets, Rasa allows developers to train models specific to their use cases and customize the behavior of the chatbot. It is known for its flexibility and control over the conversation flow.

        Here is a list of the pros and cons of AI-based infosec capabilities.

        Pros of AI in InfoSec:

        1. Improved Threat Detection:

AI enables quicker and more accurate detection of cybersecurity threats by analyzing vast amounts of data in real-time and identifying patterns indicative of malicious activities. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few of the current examples.

        2. Behavioral Analysis:

AI can perform behavioral analysis to identify anomalies in user behavior or network activities, helping detect insider threats or sophisticated attacks that may go unnoticed by traditional security measures. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and resource usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege.
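As a toy illustration of baselining user behavior, the sketch below compares a session’s average keystroke interval against a user’s historical baseline and flags large deviations; the data and threshold are illustrative assumptions only.

```python
# Minimal sketch of behavioral-baseline anomaly detection: compare a session's
# average keystroke interval against a user's historical baseline and flag
# large deviations. Values and threshold are toy examples.
import statistics

baseline_intervals_ms = [112, 120, 108, 130, 115, 119, 125, 110, 117, 122]
mean = statistics.mean(baseline_intervals_ms)
stdev = statistics.stdev(baseline_intervals_ms)

def is_anomalous(session_mean_ms: float, z_threshold: float = 3.0) -> bool:
    z = abs(session_mean_ms - mean) / stdev
    return z > z_threshold

print(is_anomalous(118))   # False: consistent with the user's baseline
print(is_anomalous(260))   # True: possible account takeover or bot activity
```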

        3. Enhanced Phishing Detection:

        AI algorithms can analyze email patterns and content to identify and block phishing attempts more effectively, reducing the likelihood of successful social engineering attacks.
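For a rough sense of how such a classifier works, the sketch below trains a TF-IDF plus logistic regression model on a tiny hand-written set of example emails; real systems train on large labeled corpora and add header, URL, and sender-reputation features.

```python
# Minimal sketch of AI-assisted phishing detection: a TF-IDF + logistic
# regression classifier over email text. The tiny dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is suspended, verify your password immediately at this link",
    "Urgent: wire transfer required today, reply with banking details",
    "Team lunch is moved to noon on Thursday in the main conference room",
    "Please review the attached quarterly report before our 2pm meeting",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

test = ["Reset your password now or your mailbox will be deleted"]
print("phishing probability:", model.predict_proba(test)[0][1])
```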

        4. Automation of Routine Tasks:

        AI can automate repetitive and routine tasks, allowing cybersecurity professionals to focus on more complex issues. This helps enhance efficiency and reduces the risk of human error.

        5. Adaptive Defense Systems:

        AI-powered security systems can adapt to evolving threats by continuously learning and updating their defense mechanisms. This adaptability is crucial in the dynamic landscape of cybersecurity.

        6. Quick Response to Incidents:

        AI facilitates rapid response to security incidents by providing real-time analysis and alerts. This speed is essential in preventing or mitigating the impact of cyberattacks.

        Cons of AI in InfoSec:

        1. Sophistication of Attacks:

        As AI is integrated into cybersecurity defenses, attackers may also leverage AI to create more sophisticated and adaptive threats, leading to a continuous escalation in the complexity of cyberattacks.

        2. Ethical Concerns:

        The use of AI in cybersecurity raises ethical considerations, such as privacy issues, potential misuse of AI for surveillance, and the need for transparency in how AI systems operate.

        3. Cost and Resource Intensive:

        Implementing and maintaining AI-powered security systems can be resource-intensive, both in terms of financial investment and skilled personnel required for development, implementation, and ongoing management.

        4. False Positives and Negatives:

        AI systems are not infallible and may produce false positives (incorrectly flagging normal behavior as malicious) or false negatives (failing to detect actual threats). This poses challenges in maintaining a balance between security and user convenience.

        5. Lack of Human Understanding:

AI lacks contextual understanding and human intuition, which may result in misinterpretation of certain situations or the inability to recognize subtle indicators of a potential threat. This is where quality assurance (QA) and governance come in, providing a backstop in case something goes wrong.

        6. Dependency on Training Data:

        AI models rely on training data, and if the data used is biased or incomplete, it can lead to biased or inaccurate outcomes. Ensuring diverse and representative training data is crucial to the effectiveness of AI in InfoSec.

        About the author:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

        The Importance of the 3-2-1 Back-Up Method

        #321backup #disasterrecovery #incidentmanagement #ransomeware #databreach #ciatriad

        Fig. 1. 3-2-1 Backup Infographic, Stock, 2023.

Backing up data is one of the best things you can do to improve your response to ransomware, a data breach, an infrastructure failure, or another type of cyber-attack. Without a comprehensive backup method that works and is tested, you likely will not be able to recover from where you left off, thereby harming your business and customers.

        The 3-2-1 backup method requires saving multiple copies of data on different device types and in different locations. More specifically, the 3-2-1 method follows these three requirements:

        1. 3 Copies of Data: Keep three copies of your data—the original plus at least two backup copies.
        2. 2 Different Media Types: Store the copies on two different types of media. This reduces the chance that a single failure mode affects more than one copy.
        3. 1 Copy Offsite: Keep one copy offsite to prevent the possibility of data loss due to a site-specific failure.

        Here are some pointers to make your backup more effective:

        1. Select the right data to back up: Critical data includes word processing documents, electronic spreadsheets, databases, financial files, human resources files, and accounts receivable/payable files. Not everything is worth backing up as it’s a waste of space. For example, data that is 8 years old with no business use is not worth backing up.
        2. Back up on a schedule: Back up data automatically on a repeatable schedule if possible: bi-weekly, weekly, or even daily if needed. Pick a day and time range when the backup will run, say Thursdays at 10:00 p.m. CST (when most users are not working).
        3. Have backup test plans and follow them: Your backup plan must be written down in a clear and detailed way describing the backup process, roles, interconnections, and milestones which can gauge if it’s working, as well as the service time to recovery expected. Then of course test the backup at least every six months or after a key infrastructure change happens.
        4. Automate backups: Use software automation to execute the backups to save user time and to reduce the risk of human error, as sketched below.
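Here is a minimal sketch of what such an automated backup could look like in Python, in the spirit of 3-2-1: it archives a source folder, copies it to two destinations (one representing offsite storage), and verifies each copy with a checksum. All paths are placeholders, and scheduling would be handled by cron or Task Scheduler.

```python
# Minimal 3-2-1-style backup sketch: archive a source folder, copy it to a
# second drive and to an "offsite" location (represented here by another
# path), and verify each copy with a SHA-256 checksum. Paths are placeholders.
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/data/critical")                      # placeholder source
DESTINATIONS = [Path("/mnt/backup_drive"),           # second media type
                Path("/mnt/offsite_sync")]           # synced offsite, e.g. cloud mount

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def run_backup() -> None:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = Path(shutil.make_archive(f"/tmp/backup-{stamp}", "zip", SOURCE))
    expected = sha256(archive)
    for dest in DESTINATIONS:
        dest.mkdir(parents=True, exist_ok=True)
        copy = Path(shutil.copy2(archive, dest))
        assert sha256(copy) == expected, f"Checksum mismatch for {copy}"
        print(f"Verified copy at {copy}")

if __name__ == "__main__":
    run_backup()
```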

        About the Author:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments including being a member of the Federal Reserve Secure Payment Task Force. Organizations relish in his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google combining Google + video chat with Google Hangouts video chat have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire.

        Silicon Valley Bank Fails Due to Lack of Diversification, Weak Governance, and Hype – Creating a Bank Run

        Fig. 1. Silicon Valley Bank Cash Transfer Vehicle, Justin Sullivan, Getty Images, 2023.

        #svbfailure #svbbank #siliconvalleybank #cryptobank #venturetech #cryptofraud #bankgovernance #bankcomplaince #FDICSVB


The California Department of Financial Protection and Innovation closed Silicon Valley Bank (SVB) on Fri 03/10/23 and the FDIC took control of and seized its deposits in the largest U.S. banking failure since the 2008 to 2012 mortgage financial crisis, and the second largest ever. Although SVB was well known in San Francisco and Boston, where it had all of its 17 branches, it was little known to the wider public. SVB specialized in financing start-ups and had become the 16th largest U.S. bank by assets. Its numbers at the end of 2022 were impressive, with $209 billion in assets and approximately $175.4 billion in deposits.

As a precursor to their failure, SVB recorded six straight quarterly losses as economic conditions turned unfavorable. Then on Mon 02/27/23 their CEO Greg Becker sold $3.6 million of stock under a pre-arranged 10b5-1 plan designed to reduce conflicts of interest, yet the sale is still potentially questionable given the gain he realized and the odd timing just weeks before the collapse. Other executives who sold in recent weeks may not have the protection of a 10b5-1 plan, which would be a worse conflict of interest.

Some degree of support for SVB is warranted because most people there are not to blame; but so is criticism, so that the financial system can improve and innovate in the free market. You cannot blindly support people (mostly senior management) and organizations (with crypto ties) who are largely responsible for startup failures, frozen loans and payrolls, huge job losses, the loss of deposited money over $250k, and a broader economic downturn, all while the SVB management team got very rich.

Obviously, the competence and character of some of the SVB management team was not as strong as at other community banks and credit unions, which aggressively avoided and overcame such failings. They likely put in more work with a deeper concern for the community, clients, and regulatory compliance, generally speaking. Many of these small community banks and credit unions are 90 or 100-plus years old and did not grow at as fast a pace as SVB; super-fast growth often equals fast failure. Conversely, SVB is only 40 years young, and most of its growth happened in the later part of that period. This is coming from someone who has consulted and worked at more than 10 financial institutions on, among other things, bank launches, tech risk, product, and compliance.

        The company’s downward spiral blew up by late Weds 03/08/23, when it surprised investors with news that it needed to raise $2.25 billion to strengthen its balance sheet. This was influenced significantly by the Fed rate increases which forced the bank to raise lending rates, and that in turn made it hard for startups and medium-sized businesses to find approved funding. SVB also locked too much of their capital away in low-interest bonds. To strengthen their balance sheet in a slightly silly and desperate move, SVB sold $21 billion in securities at a large $1.8 billion loss. The details, timing, and governance of this make little sense, since the bank knew regulators were already watching closely. As a result, their stock fell 60% Thurs to $106.04 following the restructuring news.

As would be expected, this fueled higher deposit outflows from SVB: a $25 billion decline in deposits in the final three quarters of 2022. This spooked a lot of people, including CFOs, founders, VCs, and some unnamed tech celebrities, most of whom started talking about the need to withdraw their money from SVB. SVB had almost 90% of its deposits uninsured by the FDIC, which is far out of line with what traditional banks have, because the FDIC only covers deposits up to $250k. In contrast, Bank of America has about 32% of its deposits not insured by the FDIC, an enormous gap of 58 percentage points.

Crypto firm Circle revealed in a tweet late Fri 03/10/23 that it held $3.3 billion with the bank. Roblox Corp. held 5% of its $3 billion in cash ($150 million) at the bank. Video streamer Roku held an estimated $487 million at SVB, representing approximately 26% of the company’s cash and cash equivalents as of Fri. Crypto lending platform BlockFi, which filed for bankruptcy in November, listed $227 million in uninsured holdings at the bank. Other SVB customers included ZipRecruiter, Pinterest, Shopify, and CrowdStrike. VCs like Y Combinator regularly referred startups to the bank.

Yet after these initial outflows, as people started talking negatively, the perception became greater than the reality. It did not matter whether the bank had a liquidity crisis or not. Herd psychology created a snowball effect in that no one wanted to be the last depositor at a bank, a lesson learned from the prior 2008 to 2012 mortgage banking crisis in which Washington Mutual failed.

In sum, customers withdrew a massive $42 billion of deposits by the end of Thurs 03/09/23, according to a California regulatory filing. As a result, SIVB stock plummeted another 65% before premarket trading was halted early Fri by regulators.

        The FDIC described it this way in a press release:

        1. “All insured depositors will have full access to their insured deposits no later than Monday morning, March 13, 2023. The FDIC will pay uninsured depositors an advance dividend within the next week. Uninsured depositors will receive a receivership certificate for the remaining amount of their uninsured funds. As the FDIC sells the assets of Silicon Valley Bank, future dividend payments may be made to uninsured depositors.
        2. Silicon Valley Bank had 17 branches in California and Massachusetts. The main office and all branches of Silicon Valley Bank will reopen on Monday, March 13, 2023. The DINB will maintain Silicon Valley Bank’s normal business hours. Banking activities will resume no later than Monday, March 13, including on-line banking and other services. Silicon Valley Bank’s official checks will continue to clear. Under the Federal Deposit Insurance Act, the FDIC may create a DINB to ensure that customers have continued access to their insured funds.”

        That’s largely a bank run, and it is really bad news for SVB and many startups and medium businesses. SVB has been a foundational piece of the tech startup ecosystem. It was also known to industry commentators and tech risk researchers that SVB struggled with tech risk compliance, overall governance, and even had no chief risk officer in the eight months prior.

With reasoning but no direct evidence, only circumstantial evidence — as I had a couple of interviews with them and was less than impressed with their competency and trajectory — I speculate that crypto ties were a significant negative factor here, because many of the companies and tech sub-domains SVB served are entangled with crypto and crypto-related entities. Examples include their dealings with Circle, which manages part of the USDC stablecoin reserve and confirmed it had a little more than $3 billion of reserves blocked with SVB.

        A Fri 03/10/23 Tweet from reporter Lauren Hirsch described BlockFi’s risky crypto entanglements with SVB this way: “Per new bankruptcy filing, BlockFi has $227m in Silicon Valley Bank. The bankruptcy trustee warned them on Mon that bc those funds are in a money market mutual fund, they’re not FDIC secured — which could be a prblm w/ keeping in compliance of bankruptcy law”.

Crypto compliance and insight for a big bank is very complex, undefined, and risk-prone. The biggest tech venture bank was bound to be involved in a few crypto-related failings and controversies; the above are just a few examples, and I am sure there are more. I just don’t have the data to back that up now, but I am sure it is being investigated and/or litigated.

        Note * This is a complex, evolving, and new development — some info may be incomplete and/or out of date at the time you view this.

        About the Author:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments including being a member of the Federal Reserve Secure Payment Task Force. Organizations relish in his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google combining Google + video chat with Google Hangouts video chat have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire.

Five Cyber-Tech Trends of 2021 and What They Mean for 2022.

        Minneapolis 01/08/22

        By Jeremy Swenson

        Intro:

Every year I like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since the pandemic and mass resignation/gig economy continue to be a large part of the catalyst for most of these trends. All of these trends are likely to significantly impact small businesses, government, education, high tech, and large enterprises in ways big and small.

        Fig. 1. Facebook Whistle Blower and Disinformation Mashup (Getty & Stock Mashup, 2021).

        Summary:

The pandemic continues to be a big part of the catalyst for digital transformation in tech automation, identity and access management (IAM), big data, collaboration tools, artificial intelligence (AI), and increasingly the supply chain. Disinformation efforts morphed and grew last year, challenging data and culture. This requires us to put more attention on knowing and monitoring our own social media baselines. We no longer have the same office due to mass work from home (WFH) and the mass resignation/gig economy. This implies more automated zero-trust policies and tools for IAM, with less physical badge access required. The security perimeter is now defined more by data analytics than by physical/digital boundaries.

The importance of supply chain cyber security was elevated by the Biden Administration’s Executive Order 14028 in response to hacks including SolarWinds and Colonial Pipeline. Education and awareness around the review and removal of non-essential mobile apps grow as a top priority as mobile apps multiply. All the while, data breaches and ransomware reach an all-time high while costing more to mitigate.

        1) Disinformation Efforts Accelerate Challenging Data and Culture:

Disinformation did not slow down in 2021, due to sustained advancements in communications technologies, the growth of large social media networks, and the “appification” of everything, all of which increase the ease and reach of disinformation. Disinformation is defined as incorrect information intended to mislead or disrupt, especially propaganda issued by a government organization to a rival power or the media. Examples include governments creating digital hate mobs to smear key activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion (Shelly Banjo; Bloomberg, 05/18/2019).

Today’s disinformation war is largely digital, waged via platforms like Facebook, Twitter, Instagram, Reddit, WhatsApp, Yelp, TikTok, SMS text messages, and many other lesser-known apps. Yet even state-sponsored and private news organizations are increasingly the weapon of choice, creating a false sense of validity. Undeniably, the battlefield is wherever many followers reside.

        Bots and botnets are often behind the spread of disinformation, complicating efforts to trace and stop it. Further complicating this phenomenon is the number of app-to-app permissions. For example, the CNN and Twitter apps having permission to post to Facebook and then Facebook having permission to post to WordPress and then WordPress posting to Reddit, or any combination like this. Not only does this make it hard to identify the chain of custody and original source, but it also weakens privacy and security due to the many authentication permissions involved. The copied data is duplicated at each of these layers which is an additional consideration.

        We all know that false news spreads faster than real news most of the time, largely because it is sensationalized. Since most disinformation draws in viewers which drives clicks and ad revenues; it is a money-making machine. If you can significantly control what’s trending in the news and/or social media, it impacts how many people will believe it. This in turn impacts how many people will act on that belief, good or bad. This is exacerbated when combined with human bias or irrational emotion. For example, in late 2021 there were many cases of fake COVID-19 vaccines being offered in response to human fear (FDA; 09/28/2021). This negatively impacts culture by setting a misguided example of what is acceptable.

        There were several widely reported cases of political disinformation in 2021, including misleading texts, e-mails, mailers, Facebook censorship, and robocalls designed to confuse American voters amid the already stressful pandemic. Like a narcissist’s triangulation trap, these disinformation bursts riled political opponents on both sides in all states, creating miscommunication, ad hominem attacks, and even derailed careers with impacts into the future (PBS; The Hinkley Report, 11/24/20 and Daniel Funke; USA Today, 12/23/21).

        Facebook is significantly involved in disinformation, as one recent study stated: “Globally, Facebook made the wrong decision for 83 percent of those ads that had not been declared as political by their advertisers and that Facebook or the researchers deemed political. Facebook both overcounted and undercounted political ads in this group” (New York University; Cybersecurity For Democracy, 2021). Facebook whistleblower Frances Haugen, who testified before Congress in 2021, provided further evidence of these and related Facebook failings, specifically that “Facebook executives, including CEO Mark Zuckerberg, misstated and omitted key details about what was known about Facebook and Instagram’s ability to cause harm” (Bobby Allyn; NPR, 10/05/21).

        Fig. 2. Facebook Gaps in Ad Transparency (IMEC-DistriNet KU Leuven and NYU Cyber Security for Democracy, 2021).

        With the help of Facebook’s misinformation, huge swaths of confused voters and activists aligned more with speculation and emotion/hype than with unbiased facts, and/or projected themselves as fake commentators. This dirtied the data in terms of the election process and raises the question: which parts of the election information process are broken? It normalizes petty policy fights, emotional reasoning, and a lack of unbiased intellectualism, negatively impacting western culture. All to the threat actor’s delight. Increased public-to-private partnerships, more educational rigor, and enhanced privacy protections for election and voter data are needed to combat this disinformation.

        2) Identity and Access Management (IAM) Scrutiny Drives Zero Trust Orchestration:

        The pandemic and the mass resignation/gig economy have pushed most organizations to a mass work from home (WFH) posture. Generally, this improves productivity, making it likely to become the new norm, albeit with new rules and controls. To support this, 51% of business leaders started speeding up the deployment of zero trust capabilities in 2020 (Andrew Conway; Microsoft, 08/19/20), and there is no evidence to suggest this is slowing down in the next year; rather, it is likely increasing to support zero trust orchestration. Orchestration is enhanced automation between partner zero trust applications and data, while leaving next to no blind spots. This reduces risk and increases visibility and infrastructure control in an agile way. The quantified benefit of deploying mature zero trust capabilities, including orchestration, is on average $1.76 million less in breach response costs when compared to an organization that has not rolled out zero trust capabilities (IBM Security, Cost of A Data Breach Report, 2021).

        Fig. 3. Zero Trust Components to Orchestration (Microsoft, 09/17/21).

        Zero trust moves organizations to a need-to-know-only access mindset with inherent deny rules, all the while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of apps, group membership reviews, and state-of-the-art privileged access management (PAM) tools for the next year. In the future, more of this is likely to be automated and orchestrated (Fig. 3) so that one zero trust component does not hinder another via complexity fog.
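
        As a rough illustration of the deny-by-default mindset, here is a minimal Python sketch of a zero trust style access decision; the attribute names, roles, and policy values are hypothetical and stand in for what a real policy engine or IAM/PAM product would evaluate:

        from dataclasses import dataclass

        @dataclass
        class AccessRequest:
            user_role: str          # e.g. "payroll_admin"
            resource: str           # e.g. "payroll_db"
            mfa_passed: bool
            device_compliant: bool  # patched, encrypted, managed
            geo_allowed: bool       # request comes from an expected region

        # Role-to-resource allow list: anything not listed is implicitly denied.
        ROLE_RESOURCE_ALLOW = {
            ("payroll_admin", "payroll_db"),
            ("helpdesk", "ticketing_app"),
        }

        def evaluate(req: AccessRequest) -> bool:
            """Grant access only when every zero trust signal passes; otherwise deny."""
            if (req.user_role, req.resource) not in ROLE_RESOURCE_ALLOW:
                return False  # need-to-know / RBAC check fails
            if not (req.mfa_passed and req.device_compliant and req.geo_allowed):
                return False  # assume compromise: context must also verify
            return True

        print(evaluate(AccessRequest("payroll_admin", "payroll_db", True, True, True)))  # True
        print(evaluate(AccessRequest("helpdesk", "payroll_db", True, True, True)))       # False

        The key design choice is that anything not explicitly allowed, and any request missing a healthy device, MFA, or expected location signal, is denied.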

        3) Security Perimeter is Now More Defined by Data Analytics than Physical/Digital Boundaries:

        This increased WFH posture blurs the security perimeter both physically and digitally. New IP addresses, internet volume, routing, geolocation, and virtual machines (VMs) exacerbate this blur. This raises the criticality of good data analytics and dashboarding to define the digital boundaries in real time, because prior audits, security controls, and policies may now be ineffective. For instance, empty corporate offices are the physical byproduct of mass WFH, requiring organizations to disable badge access by default. Extra security in or near server rooms is also required. The pandemic has also made vendor interactions more digital, so digital vendor connection points should be reduced and monitored in real time, and the related exception policies should be re-evaluated.

        New data lakes and machine learning informed patterns can better define security perimeter baselines. One example is knowing what percentage of your remote workforce is on which internet providers and connection types, for example Google Fiber, Comcast cable, CenturyLink DSL, AT&T 5G, etc. Only certain modems work with each of these networks, and that leaves a data trail, though it could be any type of router. What type of device users connect with (Mac/Apple, VM, or other) and whether that device is healthy can likewise be determined as part of security perimeter analytics.
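
        A minimal Python sketch of this idea follows; the session records, field names, and the simple never-seen-before rule are hypothetical placeholders for what a real data lake and machine learning pipeline would provide:

        from collections import Counter

        # Hypothetical session records pulled from VPN/SSO logs or a data lake.
        baseline_sessions = [
            {"user": "a.smith", "isp": "Comcast", "device": "managed-mac"},
            {"user": "a.smith", "isp": "Comcast", "device": "managed-mac"},
            {"user": "b.jones", "isp": "CenturyLink", "device": "managed-windows"},
        ]

        def build_baseline(sessions):
            """Record which ISPs and device types each user normally appears on."""
            profile = {}
            for s in sessions:
                p = profile.setdefault(s["user"], {"isp": Counter(), "device": Counter()})
                p["isp"][s["isp"]] += 1
                p["device"][s["device"]] += 1
            return profile

        def is_anomalous(session, profile):
            """Flag a session whose ISP or device type the user has never used before."""
            p = profile.get(session["user"])
            if p is None:
                return True  # unknown user is automatically outside the baseline
            return session["isp"] not in p["isp"] or session["device"] not in p["device"]

        profile = build_baseline(baseline_sessions)
        print(is_anomalous({"user": "a.smith", "isp": "Unknown VPN", "device": "managed-mac"}, profile))  # True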

        4) Supply Chain Risk and Attacks Increase Prompting Government Action:

        Every organization, big or small, has a supply chain. There are even subcomponents of the supply chain that can be hard to see, like third- and fourth-party vendors. A supply chain attack works by targeting a third or fourth party with access to an organization’s systems instead of hacking that organization’s networks directly.

        In 2021 cybercriminals focused their surveillance on key components of the supply chain, including hacking DNS servers, switches, routers, VPN concentrators and services, and other supply chain connected components at the vendor level. Of note was the massive Colonial Pipeline hack that spiked fuel prices in May 2021. It was caused by one compromised VPN account informed by a leaked password from the dark web (Turton, William; and Mehrotra, Kartikay; Bloomberg, 06/04/21). The SolarWinds hack was another supply chain-originated attack: the attackers compromised SolarWinds’ IT management product Orion, which in turn got them into the networks of many of that product’s customers (Lily Hay Newman; Wired, 12/19/21). The research consensus unsurprisingly ties this attack to Russian-affiliated threat actors, and there is no evidence contradicting that.

        In response to these and related attacks, the U.S. Presidential Administration issued Executive Order 14028, the heart of which requires those who manufacture and distribute software to develop a new awareness of their supply chain, including what is in their products, even open-source software (White House; 05/12/21). This comes in addition to more spending on CISA hiring and public relations efforts for vulnerabilities and NIST framework conformance. Time will tell what this order delivers, as that depends on what private sector players do.

        Fig. 4. Supply Chain Cyber Attack Diagram (INSURETrust, 2021).
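
        One practical artifact often used to show what is in a software product is a software bill of materials (SBOM). As a minimal illustration, the Python sketch below parses a CycloneDX-style SBOM and lists the declared components; the JSON content is a hypothetical example, not a real vendor SBOM:

        import json

        # Hypothetical CycloneDX-style SBOM content (not a real vendor document).
        sbom_json = """
        {
          "bomFormat": "CycloneDX",
          "components": [
            {"name": "openssl", "version": "1.1.1k", "type": "library"},
            {"name": "log4j-core", "version": "2.14.1", "type": "library"}
          ]
        }
        """

        sbom = json.loads(sbom_json)
        for component in sbom.get("components", []):
            print(f'{component["type"]}: {component["name"]} {component["version"]}')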

        5) Data Breaches Have Greatly Increased in Number and Cost:

        The pandemic has continued to be part of the catalyst for increased lawlessness, including fraud, ransomware, data theft, and other types of profitable hacking. Cybercriminals are more aggressively taking advantage of geopolitical conflict and legal standing gaps. For example, almost all hacking operations are based in countries that do not have friendly geopolitical relations with the United States or its allies, and their many proxy hops are chosen consistently with this. These proxy hops are how they hide their true location and identity.

        Moreover, with local police departments extremely overworked and understaffed, and with their number one priority being the huge uptick in violent crime in most major cities, white-collar cybercrimes remain a low priority. Additionally, local police departments have few cyber response capabilities, depending on the size of their precinct. Often, they must sheepishly defer to the FBI, CISA, and the Secret Service, or their delegates, for help. Yet not surprisingly, there is a backlog for that as well, with preference going to large companies of national concern that fall clearly into one of the 16 critical infrastructure sectors; that is, if turf fights and bureaucratic roadblocks don’t make things worse. Thus, many mid and small-sized businesses are left in the cold to fend for themselves, which often results in them paying the ransom and then being victimized a second time, all while their insurance carrier drops them.

        Further complicating this is a lack of clarity on data breach and business interruption insurance coverage and terms. Keep in mind that most general business liability insurance policies and terms were drafted before modern hacking existed, so they are by default behind the technology. Most often, general liability business insurance covers bodily injuries and property damage resulting from your products, services, or operations. Please see my related article 10 Things IT Executives Must Know About Cyber Insurance to understand incident response and to reduce the risk of inadequate coverage and/or claims denials.

        According to the Identity Theft Resource Center (ITRC)’s Q3 2021 Data Breach Report, there was a 17% year-over-year increase in breaches as of 09/30/21. By the time the Q4 2021 report is finished, that figure is likely to exceed a 30% year-over-year increase. Breaches are also more costly for the organizations suffering them, according to the IBM Security Cost of a Data Breach Report (Fig. 5).

        Fig 5. Cost of A Data Breach Increases 2020 to 2021 (IBM Security, 2021).

        From 2020 to 2021 the average cost of a data breach in U.S. dollars rose from $3.86 million to $4.24 million. That is an increase of roughly 9.8%, i.e. ($4.24M - $3.86M) / $3.86M ≈ 0.098, or almost 10%. In contrast, the preceding four years were relatively flat (Fig. 5). The pandemic and policing conundrum are a considerable part of this uptick.

        Lastly, this is a lot of money for an organization to spend on a breach. Yet the amount could be higher when you factor in other long-term consequence costs such as the increased risk of a second breach, brand damage, and/or delayed regulatory penalties that were below the surface, all of which differ by industry. In sum, it is cheaper and more risk-prudent to spend even $4.24 million, or a relative percentage at your organization, on preventative zero trust capabilities than to deal with the chaos of a data breach.

        Take-Aways:

        COVID-19 remains a catalyst for digital transformation in tech automation, IAM, big data, collaboration tools, and AI. We no longer have the same office and thus less badge access is needed. The growth and acceptability of mass WFH combined with the mass resignation/gig economy remind employers that great pay and culture alone are not enough to keep top talent. Signing bonuses and personalized treatment are likely needed. Single sign-on (SSO) will expand to personal devices and smartphones/watches. Geolocation-based authentication is here to stay with double biometrics likely. The security perimeter is now more defined by data analytics than physical/digital boundaries, and we should dashboard this with machine learning and AI tools.

        Education and awareness around the review and removal of non-essential mobile apps is a top priority. Especially for mobile devices used separately or jointly for work purposes. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. A bring your own device (BYOD) policy needs to be written, followed, and updated often informed by need-to-know and role-based access (RBAC) principles. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web/IT risk. Mapping the mobile ecosystem components in detail is a must.

        IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about it, because if we are, then our organizations will stay weak and insecure and we will be plied by the same political bias that we fear confronting. As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. More social media training is needed, as many security professionals still think it is mostly an external marketing thing. Public-to-private partnerships need to improve, and app-to-app permissions need to be scrutinized. Enhanced privacy protections for election and voter data are needed. Everyone does not need to be a journalist, but everyone can have the common sense to identify malware-inspired fake news. We must report undue bias in big tech from an IT, compliance, media, and security perspective.

        Cloud infra will continue to grow fast, creating perimeter and compliance complexity/fog. Organizations should preconfigure cloud-scale options and spend more on cloud-trained staff. They should also make sure that they are selecting more than two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and add-ons. It also mitigates risk and makes vendors bid more competitively.

        The increase in the number and cost of data breaches was in part attributed to vulnerabilities in supply chains in a few national data breach incidents in 2021. Part of this was addressed in President Biden’s Executive Order 14028 on cyber and supply chain security. This reminds us to replace outdated routers, switches, repeaters, and controllers, and to patch them immediately. It also reminds us to separate and limit network vendor access points to strictly what is needed and for a limited time window. Last but not least, we must have up-to-date, thorough business interruption / cyber insurance, with detailed knowledge of what it requires for incident response and with breach vendors pre-selected.

        About the Author:

        Jeremy Swenson is a disruptive thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments, including being a member of the Federal Reserve Secure Payment Task Force. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is a frequent speaker, published writer, and podcaster, and even does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google combining Google+ video chat with Google Hangouts have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire.

        Seven Impactful Cyber-Tech Trends of 2020 and What it Means for 2021.

        Every year I like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since the pandemic was partly the catalyst for most of these trends, in conjunction with it being a presidential election year like no other. All these trends are likely to significantly impact small businesses, government, education, high tech, and large enterprises in big and small ways.

        Fig 1. Stock Mashup, 2020.

        1) Disinformation Efforts Accelerate Challenging Data and Culture:

        Advancements in communications technologies, the growth of large social media networks, and the “appification” of everything increase the ease and capability of disinformation. Disinformation is defined as incorrect information intended to mislead or disrupt, especially propaganda issued by a government organization to a rival power or the media. For example, governments creating digital hate mobs to smear key activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion (Shelly Banjo, Bloomberg, 05/18/2019). Today’s disinformation war is largely digital via platforms like Facebook, Twitter, iTunes, WhatsApp, Yelp, and Instagram. Yet even state-sponsored and private news organizations are increasingly the weapon of choice, creating a false sense of validity. Undeniably, the battlefield is wherever many followers reside.

        Bots and botnets are often behind the spread of disinformation, complicating efforts to trace it and to stop it. Further complicating this phenomenon is the number of app-to-app permissions. For example, the CNN and Twitter apps may have permission to post to Facebook, Facebook may have permission to post to WordPress, and WordPress may post to Reddit, or any combination like this. Not only does this make it hard to identify the chain of custody and source, but it also weakens privacy and security due to the many authentication permissions.

        We all know that false news spreads faster than real news most of the time, largely because it is sensationalized. Since disinformation draws in viewers, which drives clicks and ad revenues, it is a money-making machine. If you can control what’s trending in the news and/or social media, you impact how many people will believe it. This in turn impacts how many people will act on that belief, good or bad. This is exacerbated when combined with human bias or irrational emotion. For example, in late 2020 there were many cases of fake COVID-19 vaccines being offered in response to human fear (FDA, 12/22/2020). This negatively impacts culture by setting a misguided example of what is acceptable.

        There were several widely reported cases of political disinformation in 2020, including misleading texts, e-mails, mailers, and robocalls designed to confuse American voters amid the already stressful pandemic. Like a narcissist’s triangulation trap, these disinformation bursts riled political opponents on both sides in all states, creating miscommunication, ad hominem attacks, and even derailed careers (PBS, The Hinkley Report, 11/24/20). Moreover, huge swaths of confused voters aligned more with speculation and emotion/hype than with unbiased facts. This dirtied the data in terms of the election process and raises the question of which parts of the election information process are broken. It normalizes petty policy fights, emotional reasoning, and a lack of unbiased intellectualism, negatively impacting western culture. All to the threat actor’s delight. Increased public-to-private partnerships, more educational rigor, and enhanced privacy protections for election and voter data are needed to combat this disinformation.

        2) Stalkerware Grows and Evolves Reducing Mobile Privacy:

        The increased use of mobile devices, in conjunction with the pandemic-induced work from home (WFH) growth, has produced more stalkerware. According to one report, there was a 51% increase in Android spyware and stalkerware from March through June versus the first two months of the year (Avast, Security Boulevard, 12/02/20), and this is likely to be above a 100% increase when all data is tabulated for the end of 2020. Inspired by covert law enforcement investigation tactics, this malware variant can be secretly installed on a victim’s phone, hiding as a seemingly harmless app. It is not that different from employee monitoring software, with which it is easily confused; however, stalkerware is typically installed by fake friends, jealous spouses and partners, ex-partners, and even concerned relatives. If successfully installed, it relays private information back to the attacker, including the victim’s photos, location, texts, web browsing history, call records, and more. This is where the privacy violation and abuse and/or fraud can start, yet it is hard to identify in the blur of too many mobile apps.

        3) Identity & Access Management (IAM) Scrutiny Drives Zero Trust:

        The pandemic has pushed most organizations to a mass WFH posture. Generally, this improves productivity, making it likely to become the new norm, albeit with new rules and controls. To support this, 51% of business leaders are speeding up the deployment of zero trust capabilities (Andrew Conway, Microsoft, 08/19/20). Zero trust moves organizations to a need-to-know-only access mindset with inherent deny rules, all the while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), improved need-to-know policies, group membership reviews, and state-of-the-art PAM tools for the next year.

        4) Security Perimeter is Now More Defined by Data Analytics than Physical/Digital Boundaries:

        This increased WFH posture blurs the security perimeter both physically and digitally. New IP addresses, internet volume, routing, geolocation, and virtual machines (VMs) exacerbate this blur. This raises the criticality of good data analytics and dashboarding to define the digital boundaries in real time, because prior audits, security controls, and policies may now be ineffective. For instance, empty corporate offices are the physical byproduct of mass WFH, requiring organizations to disable badge access by default. Extra security in or near server rooms is also required. The pandemic has also made vendor interactions more digital, so digital vendor connection points should be reduced and monitored in real time, and the related exception policies should be re-evaluated.

        5) Data Governance Gets Sloppy Amid Agility:

        Mass WFH has increased agility and driven sloppy data governance. For example, one week after the CARES Act was passed, banks were asked to accept Paycheck Protection Program (PPP) loan applications. Many banks were unprepared to deal with the flood of data from digital applications, financial histories, and related docs, and were not able to process them in an efficient way. Moreover, the easing of regulatory red tape at hospitals/clinics, although well-intentioned to make emergency response faster, created sloppy data governance as well. The irony is that regulators are unlikely to give either of these industries a break, nor will civil attorneys hungry for any hangnail claim.

        6) The Divide Between Good and Bad Cloud Security Grows:

        The pandemic has reminded us that there are two camps with cloud security: those who have a planned option for bigger cloud scale, and those who are burning their feet in a hasty rush to get there. In the first camp, the infrastructure is preconfigured and hardened, rates are locked, and there is less complexity, all of which improves compliance and gives tech risk leaders more peace of mind. In the latter, the infrastructure is less clear, rates are not predetermined, compliance and integration are confusing at best, and costs run high, all of which could set such poorly configured cloud infrastructures up for future disasters.

        7) Phishing Attacks Grow Exponentially and Get Craftier:

        The pandemic has caused a hurricane of phishing emails that have been hard to keep up with. According to KnowBe4 and Security Magazine, there has been a 6,000% increase in phishing e-mails since the start of the pandemic (Stu Sjouwerman, KnowBe4, 07/13/20 & Security Magazine, 07/22/20). Many of these e-mails have improved their approach and design, appearing more professional and appealing to our emotions by using tags concerning COVID relief, data, and vaccines. Ransomware increased 72% year over year (Security Magazine, 07/22/20). With many new complexities in the mobile ecosystem and exponential app growth, it is not surprising that mobile vulnerabilities also increased by 50% (Security Magazine, 07/22/20).

        Take-Aways:

        COVID-19 is the catalyst for digital transformation in tech automation, IAM, big data, collaboration tools, and AI. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will expand to personal devices and smartphones/watches. Geolocation-based authentication is here to stay, with double biometrics likely. The security perimeter is now more defined by data analytics than physical/digital boundaries, and we should dashboard this with machine learning and AI tools.

        Education and awareness around the review and removal of non-essential mobile apps is a top priority. Especially for mobile devices used separately or jointly for work purposes. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. A bring your own device (BYOD) policy needs to be written, followed and updated often – embracing need to know and role-based access (RBAC) principles. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web/IT risk. Mapping the mobile ecosystem components in detail is a must.

        Cloud infra will continue to grow fast, creating perimeter and compliance complexity/fog. Organizations should preconfigure cloud-scale options and spend more on cloud-trained staff. They should also make sure that they are selecting more than two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and add-ons. It also mitigates risk and makes vendors bid more competitively.

        IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about it, because if we are, then our organizations will stay weak and insecure and we will be plied by the same political bias that we fear confronting. As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. More social media training is needed, as many security professionals still think it is mostly an external marketing thing. Public-to-private partnerships need to improve, and app-to-app permissions need to be scrutinized. Enhanced privacy protections for election and voter data are needed. Everyone does not need to be a journalist, but everyone can have the common sense to identify malware-inspired fake news. We must report undue bias in big tech from an IT, compliance, media, and security perspective.

        About the Author:

        Jeremy Swenson is a disruptive thinking security entrepreneur and senior management tech risk consultant. Over 15 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is also a frequent speaker, published writer, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN and an MSST (Master of Science in Security Technologies) degree from the University of Minnesota.

        Abstract Forward Podcast #10: CISO Risk Management and Threat Modeling Best Practices with Donald Malloy and Nathaniel Engelsen!

        Fig. 1. Joe the IT Guy, 10/17/2018

        Featuring the esteemed technology and risk thought leaders Donald Malloy and Nathaniel Engelsen, this episode covers the threat modeling methodologies STRIDE, Attack Tree, VAST, and PASTA, and specifically how to apply them with limited budgets. It also discusses the complex intersection of how to derive ROI on threat modeling with compliance and insurance considerations. We then cover IAM best practices, including group- and role-level policy and control design. Lastly, we hear a few great examples of key CISO risk management must-dos at both the big and small company levels.

        Fig. 2. Pasta Threat Modeling Steps (Nataliya Shevchenko, CMU, 12/03/2018).
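
        For listeners who want to try the simplest of these ideas right away, here is a minimal Python sketch of a STRIDE-style pass over a system's components to seed a threat register; the component names and the category-per-component approach are illustrative, not a full methodology:

        # The six STRIDE categories applied to each component of a system.
        STRIDE = [
            "Spoofing",
            "Tampering",
            "Repudiation",
            "Information disclosure",
            "Denial of service",
            "Elevation of privilege",
        ]

        components = ["login API", "customer database", "admin console"]  # hypothetical

        # Starter threat register: one row per component/category pair, mitigations TBD.
        threat_register = [
            {"component": c, "category": s, "mitigation": "TBD"}
            for c in components
            for s in STRIDE
        ]

        for row in threat_register[:6]:  # the first component's six entries
            print(f'{row["component"]}: {row["category"]} -> {row["mitigation"]}')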

        Donald Malloy has more than 25 years of experience in the security and payment industry and is currently a security technology consultant advising many companies. Malloy was responsible for developing the online authentication product line while at NagraID Security (Oberthur), and prior to that he was Business Development and Marketing Manager for Secure Smart Card ICs for both Philips Semiconductors (NXP) and Infineon Technologies. Malloy originally comes from Boston, where he was educated and earned M.S.-level degrees in Organic Chemistry and an M.B.A. in Marketing. Presently he is the Chairman of The Initiative for Open Authentication (OATH) and is a solution provider with DualAuth. OATH is an industry alliance that has changed the authentication market from proprietary systems to an open, standards-based architecture promoting the ubiquitous strong authentication used by most companies today. DualAuth is a global leader in trusted security with two-factor authentication, including auto passwords. He resides in southern California, and in his spare time he enjoys hiking, kayaking, and traveling around this beautiful world.

        Nathaniel Engelsen is a technology executive, agilist, writer, and speaker on topics including DevOps, agile team transformation, and cloud infrastructure & security. Over the past 20 years he has worked for startups, small and mid-size organizations, and $1B+ enterprises in industries as varied as consulting, gaming, healthcare, retail, transportation logistics, and digital marketing. Nathaniel’s current security venture is Callback Security, providing dynamic access control mechanisms that allow companies to turn off well-known or static remote and database access routes. Nathaniel has a bachelor’s in Management Information Systems from Rowan University and an MBA from the University of Minnesota, where he was a Carlson Scholar. He also holds a CISSP.

        The podcast can be heard here.

        More information on Abstract Forward Consulting can be found here.

        Disclaimer: This podcast does not represent the views of former or current employers and/or clients. This podcast will make every reasonable effort to verify facts and inferences therefrom. However, this podcast is intended to entertain and significantly inform its audience based on subjective reason-based opinions. Non-public information will not be disclosed. Information obtained in this podcast may be materially out of date at or after the time of the podcast. This podcast is not legal, accounting, audit, health, technical, or financial advice. © Abstract Forward Consulting, LLC.

        8 Effective Third-Party Risk Management Tactics

        In this increasingly complex security landscape, with threat actors and vendors changing their tools rapidly, managing third-party risk is difficult and ambiguous, and it is even more difficult to know how to prioritize mitigation spend.

        Fig 1. Risk, Stock Image, 2019.

        The key to any vendor risk management program or framework is measurement, repeatability, and learning or improving from what was repeated as the business and risks change. These are the eight best practices you can follow to help assess your vendors’ security processes and their willingness to understand your risks and mitigate them together.

        1) Identify All Your Vendors / Business Associates:

        Many companies miss this easy step. Use role-based access controls (RBAC) when applicable, such as Windows groups or the like. Creating a repeatable, written compliance process for identifying vendors and updating the list as they move in and out of the company is worthwhile.

        2) Ensure Your Vendors Perform Regular Security Assessments:

        Risk assessments should be conducted on a weekly, monthly, or quarterly basis and reviewed and updated in response to changes in technology and the operating environment.

        At a minimum, security risk assessments should include:

        a) Evaluating the likelihood and potential impact of risks to in-scope assets.

        b) Instituting measures to protect against those risks.

        c) Documenting the security measures taken.

        Vendors must also regularly review the findings of risk assessments to determine the likelihood and impact of the risks they identify, as well as remediate any deficiencies.
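
        As a minimal sketch of how points (a) through (c) can be captured in one place, the Python below scores likelihood times impact and keeps the documented measures alongside each in-scope asset; the asset names, 1-5 scales, and notes are hypothetical:

        # Hypothetical in-scope assets with 1-5 likelihood and impact scores,
        # plus the documented protective measures (points a through c above).
        assessments = [
            {"asset": "claims database", "likelihood": 4, "impact": 5,
             "measures": "encryption at rest, quarterly access review"},
            {"asset": "marketing site", "likelihood": 2, "impact": 2,
             "measures": "WAF, patched CMS"},
        ]

        for a in assessments:
            a["risk_score"] = a["likelihood"] * a["impact"]  # simple 1-25 scale

        # Review highest-scoring risks first and track remediation of deficiencies.
        for a in sorted(assessments, key=lambda x: x["risk_score"], reverse=True):
            print(f'{a["asset"]}: score {a["risk_score"]} ({a["measures"]})')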

        Fig. 2. Stock Image, Third-Party Risk Mgmt Inputs, 2019.

        3) Make Sure Vendors Have Written Information Security Policies / Procedures:

        a) Written security policies and procedures should clearly outline the steps and tasks needed to ensure compliance delivers the expected outcomes.

        b) Without a reference point, policies and procedures can become open to individual interpretation, leading to misalignment and mistakes. Verify not only that companies have these written policies, but that they align with your organization’s standards. Ask other peers in your industry for a benchmark.

         4) Prioritize Vendors Based on Risk – Use Evidence and Input from Others – NOT Speculation:

        a) Critical Risk: Vendors who are critical to your operation, and whose failure or inability to deliver contracted services could result in your organization’s failure.

        b) High Risk: Vendors (1) who have access to customer data and have a high risk of information loss; and/or (2) upon whom your organization is highly dependent operationally.

        c) Medium Risk: Vendors (1) whose access to customer information is limited; and/or (2) whose loss of services would be disruptive to your organization.

        d) Low Risk: Vendors who do not have access to customer data and whose loss of services would not be disruptive to your organization.
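
        A simplified Python sketch of this four-tier ranking as a classification function follows; the boolean inputs are hypothetical field names and flatten some of the nuance above (for example, "limited" customer data access):

        def classify_vendor(critical_to_operations: bool,
                            has_customer_data: bool,
                            operationally_dependent: bool,
                            loss_disruptive: bool) -> str:
            """Return the vendor tier, checking the most severe condition first."""
            if critical_to_operations:
                return "Critical"
            if has_customer_data or operationally_dependent:
                return "High"
            if loss_disruptive:
                return "Medium"
            return "Low"

        print(classify_vendor(False, True, False, True))    # High
        print(classify_vendor(False, False, False, False))  # Low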

        5) Verify That Vendors Encrypt Data in All Applicable Places – At Rest, In Transit, etc:

        a) Encryption, a process that protects data by making it unreadable without the use of a key or password, is one of the easiest methods of protecting data against theft.

        b) When a vendor tells you their data is encrypted, trust but verify. Delve deeper and ask for details about the different scenarios, in transit and at rest, such as how backups are encrypted and what type of backup is used. Ask what type of encryption is in place and request a diagram or documentation; many vendors get lost when you ask this question.

        c) It’s also imperative that the keys used to encrypt the data are very well-protected. Understanding how encryption keys are protected is as vital as encryption itself. Are they stored on the same server? Is multi-factor authentication needed to get access to them? Is there a time limit on how long they can have access to the key?
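
        To ground the at-rest case, here is a minimal Python sketch using the third-party cryptography package (pip install cryptography); the record is a made-up example, and in practice the key would sit in an HSM or KMS gated by MFA and time-boxed access rather than in a local variable:

        from cryptography.fernet import Fernet

        key = Fernet.generate_key()         # store in a KMS/HSM, never beside the data
        cipher = Fernet(key)

        record = b"member SSN 000-00-0000"  # hypothetical sensitive field
        encrypted = cipher.encrypt(record)  # this ciphertext is what lands on disk/backup

        # Only a holder of the key (ideally gated by MFA and time-boxed access)
        # can recover the plaintext.
        assert cipher.decrypt(encrypted) == record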

        6) Ensure Vendors Have A Disaster Recovery Program:

        In order to be compliant with the HIPAA Security Rule and related rules, vendors must have a detailed disaster recovery program that includes analysis on how a natural disaster—fire, flood or even a rodent chewing through cables—could affect systems containing ePHI. The plan should also include policies and procedures for operating after a disaster, delineating employees’ roles and responsibilities. Finally, the plan should clearly outline the plan for restoring the data.

        7) Ensure Access Is Based on Legitimate Business Needs:

        Fig 3. Stock Image, RBAC Flow, 2019.

        It’s best to follow the principle of least privilege (POLP), which is the practice of limiting access rights for users to the bare minimum permissions they need to perform their work. Under POLP, users are granted permission to read, write, or execute only the files or resources they need to do their jobs. In other words, the least amount of privilege necessary. RBAC is worth mentioning here again.
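
        A minimal Python sketch of POLP-style role definitions follows; the role and permission names are hypothetical, and the point is simply that each role carries only the permissions its job requires while everything unlisted is refused:

        # Each role carries only the permissions its job requires; anything
        # unlisted is refused. Role and permission names are hypothetical.
        ROLE_PERMISSIONS = {
            "report_viewer": {"read"},
            "data_entry": {"read", "write"},
            "batch_runner": {"read", "execute"},
        }

        def permitted(role: str, action: str) -> bool:
            return action in ROLE_PERMISSIONS.get(role, set())

        print(permitted("report_viewer", "read"))   # True
        print(permitted("report_viewer", "write"))  # False: request a role change, not a broad grant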

        8) Vet All New Vendors with Due Diligence:

        a) Getting references.

        b) Using a standard checklist.

        c) Performing a risk analysis and determining if the vendor will be ranked Critical, High, Medium or Low.

        d) Documenting and reporting to senior management.

        Contact us here to learn more.