The Mythos Moment: Why AI Cyber Capabilities Just Crossed the Governance Rubicon

Fig. 1. How Mythos Evolved to Become a Recursive Threat, ChatGPT and Jeremy Swenson, 2026.

In April 2026, a quiet but profound shift occurred in cybersecurity—one that many organizations are still underestimating. Anthropic’s Claude Mythos Preview did not simply advance AI capability. It crossed a threshold. For the first time, a commercially developed model demonstrated the ability to autonomously discover and exploit software vulnerabilities at a near-expert level, including executing multi-step attack chains end-to-end.¹²

This is not incremental progress. It is a structural break. And with that break comes a new reality: the governance, security, and policy frameworks we have relied on are no longer theoretical exercises. They are operational requirements.


From Capability to Consequence—The End of the “Future Risk” Debate:

For years, discussions about AI-enabled cyber offense lived in the realm of hypotheticals—what could happen if models became sufficiently capable. That debate is now over. Mythos achieved a 73% success rate on expert-level capture-the-flag challenges and became the first AI system to complete a full 32-step enterprise network attack simulation.¹ What previously required elite human operators over many hours can now be partially automated.

At the same time, real-world testing has already shown that similar systems can uncover large volumes of previously unknown vulnerabilities. Reports indicate thousands of zero-day findings—including flaws that persisted undetected for decades—are now within reach of AI-assisted discovery.⁹ External validation reinforces this trajectory. A collaboration involving Mozilla used Mythos-like capabilities to identify hundreds of vulnerabilities in Firefox, demonstrating how quickly defensive gains—and offensive risks—can scale simultaneously. This dual-use dynamic is the defining characteristic of the Mythos moment: the same system that strengthens defense can accelerate exploitation.


The Government Contradiction—Risk, Reliance, and Reality:

What makes this moment even more consequential is not just the technology, but the policy response. In March 2026, the U.S. Department of Defense designated Anthropic as a supply chain risk after the company refused to allow unrestricted use of its models for autonomous weapons and surveillance applications.³ This effectively barred Anthropic from Pentagon contracts.

Yet within weeks, reporting confirmed that the National Security Agency—which operates within the same defense ecosystem—was actively using Mythos under controlled access.⁵⁶ At the same time, the Office of Management and Budget began negotiating a framework to deploy a modified version of the model across civilian agencies, including energy and financial regulators.⁷

This creates a striking contradiction:

  • One part of government labels the system a national security risk.
  • Another part actively deploys it.
  • A third is designing policy to scale its adoption.

This is not just bureaucratic inconsistency—it is a preview of how difficult governing frontier AI will be.


The Real Precedent—Governing AI as a Cyberweapon:

What is being negotiated right now matters far beyond Mythos itself. The White House–led framework under development is effectively the first attempt to govern an AI system with cyberweapon-level capabilities, not just data privacy or model safety.

Three emerging principles define this model:

1. Data Sovereignty Sensitive code and infrastructure data must remain within isolated government-controlled environments.

2. Model Integrity Inputs cannot be used to retrain or improve the underlying model, preventing unintended knowledge transfer.

3. Human-in-the-Loop Oversight No autonomous execution—human validation remains mandatory before action.

These are not minor guardrails. They represent the likely baseline for how governments—and eventually regulated industries—will manage high-capability AI systems. If history is any guide, these standards will propagate outward, much like FedRAMP reshaped cloud security procurement. Within 12–18 months, similar requirements are likely to appear in enterprise contracts, regulatory expectations, and audit frameworks.


The Industry Signal—This Is Already Scaling:

The private sector is not waiting. Through Project Glasswing, Anthropic has already deployed Mythos capabilities to a controlled group of major technology and infrastructure organizations, including cloud providers, semiconductor firms, and financial institutions.²

At the same time, companies like Microsoft are moving to integrate similar AI-driven vulnerability discovery into their secure development lifecycles, signaling that this capability will become embedded—not optional—in modern engineering practices. The implication is clear. AI-assisted vulnerability discovery is becoming a standard feature of cybersecurity—not an edge capability.


The Hard Truth—Containment Is Likely Temporary:

Perhaps the most important—and uncomfortable—reality is this:

Containment will not hold indefinitely. History shows that advanced AI capabilities diffuse rapidly. Model architectures leak, competitors replicate breakthroughs, and open-weight alternatives emerge. Even today, non-frontier models can replicate meaningful portions of Mythos-like capability at far lower cost and with fewer restrictions.¹⁴ That means the current environment—where only a limited set of organizations have access—is a temporary window. Organizations that treat this as a policy issue rather than an operational priority are making a critical mistake.


What This Means for Enterprise Leaders:

The Mythos precedent is not a niche technical development. It is a strategic inflection point. Three implications stand out:

1. The Attack Surface Is No Longer Static:

AI compresses the timeline between vulnerability discovery and exploitation from weeks or months to hours. Legacy assumptions—especially around “safe” unpatched systems—are no longer valid.

2. Patch Velocity Becomes a Board-Level Issue:

Organizations with slow remediation cycles are structurally exposed. If critical vulnerabilities can be identified and weaponized faster, governance processes must accelerate accordingly.

3. Defense Must Become Structural, Not Reactive:

Emerging approaches like confidential computing—hardware-isolated execution environments—offer a path to reducing the impact of exploits regardless of discovery speed.

In other words, the goal shifts from “find and fix everything” to “limit what can be compromised at runtime.”


The Strategic Window—Act Before the Curve Flattens:

There is still a narrow window of advantage. Today, frontier capabilities are relatively concentrated. Tomorrow, they will not be. Organizations that move now—by modernizing vulnerability management, accelerating patch cycles, and adopting structural defenses—can get ahead of the curve. Those who wait for regulatory clarity or broader market adoption will likely find themselves reacting under pressure.


Final Thoughts—How to Mitigate These Risks Now:

Here are the most practical, high-impact actions organizations can take right now to mitigate risks associated with advanced AI systems, data exposure, and model misuse—especially in light of incidents like large-scale leaks or “model mythos” exposures:

1) Lock Down Data at the Source:

The most immediate risk reducer is controlling what goes into AI systems in the first place.

  • Classify and tier data (public, internal, confidential, restricted).
  • Prohibit sensitive data (e.g., IP, credentials, client info) from being entered into external AI tools.
  • Implement data loss prevention (DLP) policies across endpoints, SaaS, and APIs.
  • Tokenize or anonymize sensitive datasets before AI usage.

2) Enforce Strong Access Controls:

AI systems often inherit weak identity governance from the broader environment.

  • Apply least privilege access to AI tools, datasets, and model pipelines.
  • Require multi-factor authentication (MFA) everywhere AI is accessed.
  • Monitor and restrict API key usage (rotate keys frequently).
  • Segment environments (dev/test/prod) to prevent lateral movement.

3) Introduce AI-Specific Governance:

Traditional IT governance is not sufficient for AI risk.

  • Stand up a lightweight AI governance council (security, legal, data, business).
  • Define acceptable use policies for generative AI tools.
  • Maintain an AI system inventory (models, vendors, datasets, use cases).
  • Require risk assessments before deploying AI into production.

4) Monitor for Data Leakage and Model Abuse:

You can’t protect what you don’t observe.

  • Log all prompts, outputs, and API interactions (where legally permissible).
  • Deploy behavioral analytics to detect unusual model usage patterns.
  • Scan outputs for sensitive data leakage (prompt injection, exfiltration attempts).
  • Red-team models with adversarial testing scenarios.

5) Harden Third-Party and Vendor Risk:

Many AI risks enter through vendors, not internal builds.

  • Conduct AI-focused vendor due diligence (data handling, training sources, retention policies).
  • Require contractual clauses on: Data ownership Model training boundaries Breach notification timelines.
  • Prefer vendors offering private model instances or zero data retention.

6) Implement Prompt and Output Controls:

The interface layer is a major attack surface.

  • Use prompt filtering and sanitization to block injection attempts.
  • Apply output guardrails to prevent harmful or sensitive responses.
  • Restrict high-risk capabilities (e.g., code execution, system access).
  • Use retrieval-augmented generation (RAG) with vetted internal sources only.

7) Train Employees (Fast, Not Perfect):

Human behavior is still the biggest variable.

  • Roll out short, targeted training on: Safe AI usage, Data handling do’s and don’ts, Prompt injection awareness.
  • Provide approved AI tools so employees don’t default to shadow AI.
  • Reinforce “don’t paste what you wouldn’t email externally”.

8) Prepare for Incident Response:

Assume exposure will happen—speed matters.

  • Update incident response plans to include AI-specific scenarios.
  • Define playbooks for: Data leakage via prompts, Model compromise or abuse, Third-party AI breaches.
  • Run tabletop exercises simulating AI-related incidents.

9) Control Model Inputs and Training Data:

What shapes the model shapes the risk.

  • Vet training datasets for: Sensitive information, Copyright/IP exposure, Bias and integrity issues.
  • Maintain data provenance tracking.
  • Avoid uncontrolled fine-tuning on raw internal data.

10) Start Small with Secure Architectures:

Don’t boil the ocean—secure what’s already in motion.

  • Use private or on-prem AI deployments for sensitive workloads.
  • Isolate AI systems within secure cloud environments.
  • Gate external model access through controlled middleware or APIs.
  • Adopt a “human-in-the-loop” approach for high-risk decisions.

Endnotes:

  1. UK AI Security Institute, “Our Evaluation of Claude Mythos Preview’s Cyber Capabilities,” April 2026.
  2. Anthropic, “Project Glasswing: Securing Critical Software for the AI Era,” April 2026.
  3. CNBC, “Judge Presses DOD on Why Anthropic Was Blacklisted,” March 24, 2026.
  4. CNBC, “Anthropic Loses Appeals Court Bid to Temporarily Block Pentagon Blacklisting,” April 8, 2026.
  5. TechCrunch, “NSA Spies Are Reportedly Using Anthropic’s Mythos,” April 20, 2026.
  6. Axios, “NSA Using Anthropic’s Mythos Despite Defense Department Blacklist,” April 19, 2026.
  7. CSO Online, “White House Moves to Give Federal Agencies Access to Anthropic’s Claude Mythos,” April 2026.
  8. Fortune, “Anthropic Acknowledges Testing New AI Model,” March 26, 2026.
  9. TechCrunch, “Anthropic Debuts Preview of Powerful New AI Model Mythos,” April 7, 2026.
  10. Axios, “Anthropic to Have Peace Talks at White House,” April 17, 2026.
  11. CNBC, “Trump Says He Had ‘No Idea’ About White House Meeting,” April 17, 2026.
  12. Washington Post, “Anthropic CEO Visits White House Amid Hacking Fears,” April 17, 2026.
  13. Council on Foreign Relations, “Six Reasons Claude Mythos Is an Inflection Point,” April 2026.
  14. Evron, Mogull, Lee et al., “The AI Vulnerability Storm: Building a Mythos-Ready Security Program,” CSA/SANS/OWASP, April 2026.

Crypto, Conflict, and Capital Flight: What Iran’s On-Chain Shock Signals for Middle East Economics and U.S. Markets


In late February 2026, shortly after coordinated U.S.–Israeli airstrikes struck targets in Tehran, blockchain analytics firms observed an abrupt spike in cryptocurrency withdrawals from Iran’s largest digital asset exchange. Within minutes of the strikes, Nobitex reportedly experienced a roughly 700 percent surge in withdrawals, with millions of dollars in crypto leaving the platform in a compressed time window.¹ This episode, while modest in absolute global market terms, offers a revealing case study in how digital assets function during geopolitical stress—and what that may signal for Middle East economics and U.S. financial markets over the next year.

A Rapid Withdrawal Shock:

Reporting indicates that nearly $3 million exited Nobitex in a single hour following the strikes, with approximately $10 million leaving Iranian exchanges over several days.² Such flows are small relative to global crypto trading volumes but significant within the Iranian financial context, where capital controls, sanctions, and currency instability already shape economic behavior.

Iran’s domestic currency, the rial, has faced long-standing pressure from inflation, sanctions, and restricted access to global banking networks. In that environment, cryptocurrencies—particularly Bitcoin and dollar-denominated stablecoins—have increasingly served as alternative stores of value and channels for cross-border transfers.³ The surge in withdrawals appears consistent with crisis-driven capital preservation behavior rather than speculative trading alone.

Crypto as a Financial “Pressure Valve”:

The events underscore crypto’s evolving role as a decentralized financial “pressure valve” in sanctioned or conflict-affected economies. When traditional banking rails are constrained or politically vulnerable, digital assets offer relative portability and censorship resistance.¹

Internet blackouts and temporary exchange disruptions complicate interpretation. Outages can cluster transactions when connectivity resumes, making withdrawal spikes appear sharper than underlying demand alone would suggest.³ Nonetheless, the pattern aligns with prior episodes in emerging markets where digital assets gained traction during currency stress.

The lesson is not that crypto replaces sovereign financial systems, but that it increasingly supplements them under strain.

Economic Implications for the Middle East (Next 12 Months):

Looking forward, several dynamics are likely to shape regional economics:

1. Expanded Informal Dollarization via Digital Assets. Sanctioned or financially constrained economies may see broader retail and institutional adoption of dollar-linked stablecoins as parallel monetary tools.

2. Heightened Regulatory and Surveillance Pressure. As crypto flows intersect with sanctions regimes, U.S. and allied regulators are likely to intensify scrutiny of exchanges, custodians, and cross-border blockchain activity.¹

3. Persistent Capital Flight Incentives. Geopolitical volatility increases incentives for households and firms to diversify outside domestic banking systems.

4. Infrastructure Fragility Risks. Internet shutdowns and exchange outages remain structural vulnerabilities in crisis environments.³

Collectively, these forces suggest that digital asset adoption in parts of the Middle East will continue—not as ideological endorsement of crypto, but as pragmatic economic hedging.

What This Means for U.S. Markets:

For U.S. investors and policymakers, the implications extend beyond regional headlines.

Oil and Energy Sensitivity. Any escalation involving Iran carries oil supply risk implications. Even absent sustained disruption, perceived risk premiums can lift energy prices.

Safe-Haven Flows and Dollar Strength. Periods of geopolitical tension historically reinforce demand for U.S. Treasuries and dollar-denominated assets. Concurrently, Bitcoin and gold often experience volatility tied to risk sentiment shifts.⁴

Regulatory Spillover. If crypto is increasingly viewed as a sanctions-adjacent vector, U.S. enforcement posture may tighten, affecting exchanges and institutional investors.

Systemic Interconnectedness. Crypto is no longer a siloed asset class. It is embedded within global liquidity networks. Geopolitical events can trigger rapid on-chain responses that ripple into equities, commodities, and foreign exchange markets.

Forecast—A Converging Risk Landscape:

Over the next year, expect three converging trends:

  1. Greater integration between geopolitical risk modeling and digital asset analytics.
  2. Increased compliance burdens on global crypto infrastructure providers.
  3. Continued volatility transmission across oil, crypto, emerging market currencies, and U.S. equities during regional escalations.

The Iranian withdrawal spike may have involved only millions of dollars—but its significance lies in what it signals: digital capital now moves at the speed of conflict.

For U.S. markets, that means geopolitical shocks increasingly transmit through hybrid financial rails—traditional and decentralized alike. Outside of economic considerations, peace is desirable for the benefit of all.


Bibliography:

  1. Yahoo Finance. “Millions of Dollars in Crypto Left Iranian Exchanges After Airstrikes.” February 2026.
  2. Economic Times. “Why Did Iran’s Largest Crypto Exchange See a 700% Withdrawal Spike Minutes After US–Israel Airstrikes Hit Tehran?” February 2026.
  3. Bitget News. “Iranian Crypto Exchange Records Surge in Withdrawals Following Tehran Strikes.” February 2026.
  4. Forbes. “Iran War, an Oil Crisis, a Crypto Stress Test.” March 2026.

🛡️ Cyberattack on St. Paul Disrupts Systems, Triggers National Guard Response: A Wake-Up Call for City Infrastructure and Public-Private Security

Fig. 1. St. Paul Cyber Attack, St. Paul, 2025.

A major cyberattack brought critical systems across the City of St. Paul to a halt this week, prompting Governor Tim Walz to take the rare step of activating the Minnesota National Guard’s 177th Cyber Protection Team through Executive Order 24-25. The breach, which has yet to be fully disclosed in technical detail, forced the shutdown of municipal networks, libraries, payment systems, and internal applications—raising alarms about the fragility of local government infrastructure in the digital age.

This crisis has not only impacted operations but also exposed deeper vulnerabilities—from disruption of city services to potential legal and evidentiary breakdowns, especially concerning the chain of custody for digital evidence and sensitive case management platforms used by law enforcement and legal teams.

“The cyberattack… has resulted in a disruption of city services and operations, and the city has requested assistance from the State of Minnesota in the form of technical expertise and personnel,” Gov. Walz stated in the executive order. “The incident poses a threat to the delivery of critical government services.” (Walz, 2025)


Legal and Infrastructure Ramifications:

One often overlooked consequence of cyberattacks on public systems is the risk to legal integrity. City governments often store digital evidence for court cases, police body cam footage, and case records within networked systems. When such systems are compromised or taken offline, the chain of custody—a legal requirement for maintaining the integrity of evidence—may be broken. This could lead to dismissed charges, delayed court proceedings, or contested verdicts.

Beyond the courts, St. Paul’s systems underpin essential infrastructure. From 911 backend operations to building permits, utility management, and emergency communications, these disruptions ripple into residents’ lives and civic trust. Any delay in fire dispatch systems, real-time weather alerts, or even payroll processing for emergency responders can escalate into broader crisis.


Why Public-Private Partnerships Are Essential:

The attack illustrates the need for stronger collaboration between public entities and private cybersecurity firms. Municipalities often operate with limited budgets, aging infrastructure, and insufficient security staff. In contrast, private-sector vendors—ranging from cloud security providers to endpoint monitoring specialists—offer scalable defenses and expertise that cities can’t always sustain in-house.

Governor Walz’s executive order underscores this reality, stating:

“Cooperation between the Minnesota Department of Information Technology Services (MNIT), the National Guard, and other partners is necessary to protect public assets and respond to cybersecurity threats.” (Walz, 2025)

This partnership must also extend beyond technical vendors. Insurance carriers, legal risk consultants, and incident response firms should be part of proactive city planning, not just post-breach triage.


The Human Factor: Employee Training Matters:

While technical systems are critical, human error remains the top vector for cyberattacks, especially through phishing and social engineering. A well-crafted phishing email clicked by a single city employee can introduce malware into core systems.

St. Paul’s situation shows how cybersecurity education is no longer optional. Ongoing staff training—including:

  • Simulated phishing attacks
  • Clear escalation protocols
  • “Stop and verify” culture for email attachments and access requests

…is essential. Cities should treat their staff as the first line of defense, not just passive users.


The Road Ahead: What Cities Must Do Now:

The cyberattack on St. Paul should serve as a regional and national inflection point. Other cities must take this as a cue to reassess their cyber posture through the following:

Strategic Priorities:

  1. Zero Trust Implementation Limit internal access and require constant authentication, even for trusted users.
  2. Third-Party Risk Audits Review vendors, contractors, and outsourced services for security gaps.
  3. Resilient Backup and Recovery Ensure data is stored offsite and tested regularly for recovery readiness.
  4. Legal and Digital Forensics Planning Build frameworks for protecting the chain of custody in case of breach.
  5. Integrated Public-Private Playbooks Define shared roles between city staff, Guard units, and private partners in cyber response drills.
  6. Community Transparency Proactively inform the public about risks, responses, and what’s being done to rebuild digital trust.

Final Thoughts:

The breach in St. Paul is not just a local IT issue—it is a civic security event that affects courts, emergency services, legal integrity, and public confidence. Governor Walz’s activation of the National Guard is a bold signal that digital defense is now a matter of public safety.

“Immediate action is necessary to provide technical support and ensure continuity of operations,” reads Executive Order 24-25 (Walz, 2025).

Moving forward, public-private partnerships, cybersecurity training, and legal readiness must become foundational to how cities govern in the digital era. The stakes are no longer theoretical—they are real, operational, and deeply human.


References:

  1. FOX 9. (2025, July 29). Gov. Walz activates National Guard after cyberattack on city of St. Paul. https://www.fox9.com/news/gov-walz-activates-national-guard-after-cyberattack-st-paul
  2. KSTP. (2025, July 29). City of St. Paul experiencing unplanned technology disruptions. https://kstp.com/kstp-news/top-news/city-of-st-paul-experiencing-unplanned-technology-disruptions/
  3. League of Minnesota Cities. (2024, October). Cybersecurity Incident Reporting Requirements for Cities. https://www.lmc.org/news-publications/news/all/fonl-cybersecurity-incident-reporting-requirements/
  4. Reddit. (2025, July 29). Minnesota National Guard activated after city cyberattack [Discussion threads]. https://www.reddit.com/r/minnesota
  5. Walz, T. (2025, July 29). Executive Order 24-25: Activating the Minnesota National Guard Cyber Protection Team. Office of the Governor, State of Minnesota. https://mn.gov/governor/assets/EO-24-25_tcm1055-621842.pdf

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years, he has held progressive roles at many banks, insurance companies, retailers, healthcare organizations, and even government entities. Organizations appreciate his talent for bridging gaps, uncovering hidden risk management solutions, and simultaneously enhancing processes. He is a frequent speaker, podcaster, and a published writer – CISA Magazine and the ISSA Journal, among others. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MBA from Saint Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Cyber Security Summit Think Tank , the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy. He also has certifications from Intel and the Department of Homeland Security.

Esports Cyber Threats and Mitigations

Esports Cyber Threats and Mitigations:

On 06/10/21 major Esports software company, Electronic Arts (EA) was hacked. They are one of the biggest esports companies in the world. They count many major hit games including Battlefield, The Sims, Titanfall, and Star Wars: Jedi Fallen Order, in addition to many online league sports games; and they develop and/or publish many others. An EA spokesperson described game code and related tools as stolen in the hack and that they are still investigating the privacy implications. Early reports however indicated that a whopping 780GB of data was stolen (Balaji N, GBHackers On Security, 06/12/21).

Fig 1. EA Sports Hacked Image. Balaji N, GBHackers On Security, 06/12/21.

Given this recent hack here is an updated overview of some of the esports cyber threats and mitigations.

Threats:

1. Aimbots and Wallhacks

As esports revenues and player prizes increase, more players will look for opportunities to exploit the game to gain an advantage over competitors. Many underground hacker forums reveal hundreds of aimbots and wallhacks. Prices for such tools start as low as $5.00 but go as high as $2,000. These are essentially cheat tools for sale but they are technically prohibited in official competitions (Trend Micro, 2019).

Aimbots are a type of software used in multiplayer first-person shooter games to provide varying levels of automated targeting that gives the user an advantage over other players. Wallhacks allow the player to change the properties of in-game walls by making them transparent or nonsolid, making it easier to find or attack enemies.

Fig 2. Wallhack Cheat For WarZone (May 6th 2020, Tom Warren).

No alt text provided for this image
Fig 2. Wallhack Cheat For WarZone (May 6th 2020, Tom Warren).

2. Hidden Hardware Hacks

Some of the hardware used in competitions can be manipulated by hackers with ease. For each tournament, a gaming board sets the rules on what equipment they allow tournament participants to use. A lot of professional tournaments allow players to bring their own mouse and keyboard, which have been known to house hacks.

Case in point, in 2018 a Dota 2 team was disqualified from a $15 million tournament after judges caught one of its members using a programmable mouse – the Synapse 3 configuration tool. The mouse allowed the player to perform movements that would be impossible without macros, a shortcut of preset key sequences not possible with standard nonprogrammable hardware (Trend Micro, 2019).

3. Stolen Accounts and Credentials

Threat actors have been increasingly targeting the esports industry. They do this by harvesting and selling user ID and password data of both internal and external systems for esports companies. A study by threat intelligence company KELA indicated that more than half a million login credentials tied to the employees of 25 leading game publishers have been found for sale on dark web bazaars (Amer Owaida, Welivewellsecurity, 01/05/2021).

4. Ransomware and DDoS (Distributed Denial of Services) Attacks

Ransomware can come via phishing, smishing, spam, or via free compromised plug-ins. When installed on the gaming platform they lock everything up and force the host to pay ransom in the form of difficult-to-trace digital currency like Bitcoin. Interestingly, researcher Danny Palmer of ZDnet cited Trend Micro’s research when he described the marriage of ransomware and DDoS attacks as follows:

“Researchers also warn that attackers could blackmail esports tournament organizers, demanding a ransom payment in exchange for not launching a DDoS attack – something which organizers might consider given how events are broadcast live and the reputational damage that will occur to the host organizer if the event gets taken offline” (Danny Palmer, ZDnet, 10/29/2019).

Mitigations:

1. Use a VPN (Virtual Private Network)

VPN establishes an encrypted tunnel between you and a remote server ran by the VPN provider. All your internet traffic is run through this tunnel, so your data is secure from eavesdropping. Your real IP address and location is masked preventing IPS tracking as your traffic is exiting the VPN server. You can also more confidently use public WIFI with a VPN.

2. Use A Password Management Tool and Strong Passwords

Another way to stay safe is by setting passwords that are longer, complex, and thus hard to guess. Additionally, they can be stored and encrypted for safekeeping using a well-regarded password vault and management tool. This tool can also help you to set strong passwords and can auto-fill them with each login — if you select that option. Yet using just the password vaulting tool is all that is recommended. Doing these two things makes it difficult for hackers to steal passwords or access your gaming accounts.

3. Use Only Whitelisted Gaming Sites Not Blacklisted Ones or Ones Found Via the Dark Web

Use only approved whitelisted gaming platforms and sites that do not expose you to data leakages or intrusion on your privacy. Whitelisting is the practice of explicitly allowing some identified websites access to a particular privilege, service, or access. Blacklisting is blocking certain sites or privileges. If a site does not assure your privacy, do not even sign up let alone participate.

Chinese Hackers Stole About 614GB of Data from Unnamed U.S. Navy Contractor

A series of cyber attacks backed by Chinese government hackers earlier this year infiltrated the computers of a U.S. Navy contractor, allowing a large amount of highly-sensitive data on undersea warfare to reportedly be stolen. Likely by A People’s Liberation Army unit, known as Unit 61398, which is filled with skilled Chinese hackers who pilfered corporate trade secrets to benefit Chinese state-owned industry. The breaches, which took place in January and February 2018, including secret plans to develop a supersonic anti-ship missile for use on US submarines by 2020, according to American officials.

This data was of a highly sensitive nature despite it being housed on the contractor’s unclassified network – putting it here was mistake and exacerbated vulnerabilities. A contractor who works for the Naval Undersea Warfare Center in Newport, R.I. — a research and development center for submarines and underwater weaponry — was the target of the hackers, the Post reported. While the unnamed officials did not identify the contractor, they told the newspaper that a total of 614 gigabytes of material was taken. Included in that data was information about a secret project known as Sea Dragon, in addition to signals and sensor data and the Navy submarine development unit’s electronic warfare library. The Washington Post said it agreed to withhold some details of what was stolen at the request of the U.S. Navy over fears it could compromise national security.

A Navy spokesperson told Fox News in a statement the service branch will not comment on specific incidents, but cyber threats are “serious matters” officials are working to “continuously” bolster awareness of. There are measures in place that require companies to notify the government when a cyber incident has occurred that has actual or potential adverse effects on their networks that contain controlled unclassified information,” Cmdr. Bill Speaks said. “It would be inappropriate to discuss further details at this time.”Military experts fear that China has developed capabilities that could complicate the Navy’s ability to defend US allies in Asia in the event of a conflict with China. The Chinese are investing in a range of platforms, including quieter submarines armed with increasingly sophisticated weapons and new sensors, Admiral Philip Davidson said during his April nomination hearing to lead US Indo-Pacific Command. And what they cannot develop on their own, they steal – often through cyberspace, he said. “One of the main concerns that we have,” he told the Senate Armed Services Committee, “is cyber and penetration of the dot-com networks, exploiting technology from our defense contractors, in some instances.”

Chinese government hackers have previously targeted information on the U.S. military, including designs for the F-35 joint strike fighter which they copied. Last year, South Korean firms involved in the deployment of the U.S. Army’s Terminal High-Altitude Area Defense, or THAAD, missile defense system, the Wall Street Journal reported at the time. No matter how fast the government moves to shore up its cyber defenses, and those of the defense industrial base, the cyber attackers move faster.

Compiled from Jennifer Griffin at Fox News, The Post, The Wall Street Journal, Independent News, and Huff Post. Edited and curated by Jeremy Swenson of Abstract Forward Consulting.