Foreign Threat Actors Amplify Disinformation Ahead of 2024 U.S. Election, Warn FBI and CISA

Minneapolis—

As the 2024 U.S. general election nears, the FBI and CISA have issued a public service announcement to alert the public about foreign disinformation campaigns.[1] These campaigns, led by foreign adversaries, aim to undermine voter confidence by spreading false narratives before, during, and after Election Day. Despite these efforts, the FBI and CISA confirm that there is no evidence of malicious cyber activity compromising U.S. election infrastructure, including voter registration systems, ballots, or vote-counting processes.

Evolving Disinformation Tactics with AI:

The disinformation campaigns have become more sophisticated due to the use of generative AI tools, which allow foreign actors to create convincing fake content, such as AI-generated articles, deepfake videos, and synthetic media.[2] These false narratives are then spread across multiple platforms, both in the U.S. and abroad. By lowering the barrier for creating and distributing disinformation, AI has made it easier for foreign actors to mislead the public and erode trust in the election process.

Disinformation Campaigns from Russia and Iran:

Russia and Iran are identified as the primary foreign actors behind many of these disinformation efforts. Russian operatives have set up AI-enhanced social media bot farms and cybersquatted on domains mimicking legitimate news websites, such as “washingtonpost.pm” and “foxnews.in,” to disseminate propaganda. The DOJ responded by seizing over 30 of these domains and indicting individuals linked to Russian government-controlled media outlets that covertly funded U.S. influence campaigns.

Iran, too, has engaged in similar efforts, with recent DOJ charges against Iranian nationals accused of hacking and leaking U.S. campaign materials to manipulate the election outcome.

Public Recommendations:

To help combat the spread of disinformation, the FBI and CISA urge the public to:

  • Educate themselves about foreign influence operations, especially AI-generated content.
  • Rely on trusted sources, such as state and local election officials, to verify election-related claims.
  • Understand AI-generated content by looking for clues that content may be doctored or synthetic.
  • Report suspicious activity or disinformation attempts to the FBI.

Election Security Efforts:

Federal, state, and local authorities are collaborating to safeguard U.S. elections. The FBI investigates election crimes and foreign influence campaigns, while CISA works to secure election infrastructure. Jen Easterly, director of CISA, has reassured voters that the systems are more secure than ever, with robust cybersecurity measures in place, including paper ballot records that verify vote counts in 97% of jurisdictions.

Easterly emphasized that, although foreign adversaries will continue to attempt to influence U.S. elections, they will not be able to alter the final outcome. She also encouraged patience, as election results may take time to finalize, and urged the public to trust official sources. Serving as an election judge is another worthwhile way to support the process.

Conclusion:

As Election Day approaches, foreign disinformation campaigns remain a threat, but significant efforts have been made to secure the election process. With the support of informed voters and coordinated efforts from election officials, the integrity of U.S. elections can be maintained. We in the private sector need to share and support these efforts, as CISA and the FBI cannot be everywhere.

About the Author:

Jeremy A. Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and seasoned senior management tech risk and digital strategy consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota’s Technological Leadership Institute, an MBA from Saint Mary’s University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale, and New Hope Community Police Academy (MN), and the Minneapolis FBI Citizens Academy. You can follow him on LinkedIn and Twitter.


[1] CISA. “FBI and CISA Issue Public Service Announcement Warning of Tactics Foreign Threat Actors are Using to Spread Disinformation in the 2024 U.S. General Election.” 10/18/24. https://www.cisa.gov/news-events/news/fbi-and-cisa-issue-public-service-announcement-warning-tactics-foreign-threat-actors-are-using

[2] CISA. “FBI and CISA Issue Public Service Announcement Warning of Tactics Foreign Threat Actors are Using to Spread Disinformation in the 2024 U.S. General Election.” 10/18/24. https://www.cisa.gov/news-events/news/fbi-and-cisa-issue-public-service-announcement-warning-tactics-foreign-threat-actors-are-using

Navigating the Future of Media, Law, and AI: Reflections on the 2024 Oxford Media Policy Summer Institute

Fig 1. Jeremy Swenson at the Oxford Media Policy Summer Institute, 2024.

#medialaw #oxford #mediaethics #airegulation #aipolicy #techethics #oversightboard #techrisk #web3 #blockchain #techcensorship #contentmoderation Oxford Media Policy Summer Institute Centre for Socio-Legal Studies, University of Oxford Faculty of Law, University of Oxford

Minneapolis

The Oxford Media Policy Summer Institute[1], held annually for over twenty-five years in person in Oxford, UK, is a prestigious program that unites leading communications scholars, media lawyers, regulators, human rights activists, technologists, and policymakers from around the globe. As an integral part of Oxford’s Centre for Socio-Legal Studies and the Faculty of Law, specifically through the Program in Comparative Media Law and Policy (PCMLP), the Institute fosters a global and multidisciplinary understanding of the complex relationships between technology, media, and policy. It aims to broaden the pool of talented scholars and practitioners, connect them to elite professionals, facilitate interdisciplinary dialogue, and build a space for future collaborations. With over 40 participants from more than 20 countries, the Institute provides an unparalleled opportunity to engage with diverse experiences and media environments. Its alumni network, comprising leaders in government, corporations, non-profits, and academia, remains vibrant and collaborative long after the program concludes.

Reflecting on my completion of the 2024 Oxford Media Policy Summer Institute, I am struck by the depth of knowledge I gained, particularly in the areas of media, tech and diversity, and AI policy. One of the most enlightening discussions revolved around the EU’s approach to regulating platforms like Facebook, Twitter, and Google. The EU has been at the forefront of creating frameworks that balance the need for free expression with the imperative to curb harmful content. I learned about the evolving regulatory landscape, including the EU’s Digital Services Act (DSA)—which addresses content moderation, online targeted advertising, and the configuration of online interfaces and recommender systems—and the UK’s Online Safety Act—which seeks to hold tech giants accountable for the content on their platforms. These discussions highlighted the increasing importance of the “Fifth Estate,” a concept coined by William H. Dutton, referring to the networked individuals who, through the Internet, are empowering themselves in ways that challenge the control of information by traditional institutions.[2] These policies aim to regulate this new power dynamic while protecting vulnerable users and ensuring transparency and accountability.

Fig. 2. The 2024 Cohort of the Oxford Media Policy Summer Institute.

The Institute also provided invaluable insights into AI types, elections, and content moderation in the Global South. The discussions on the Global South’s technological maturity and policy governance revealed significant gaps in infrastructure, regulation, and policy. These challenges are evident in cases of internet censorship and shutdowns during political unrest, as well as instances of election manipulation. However, I also learned about innovative approaches being developed across the Global South, which could serve as models for other regions. One such approach is a proposed third-wave model of tech governance that emphasizes local context, community involvement, and adaptive regulation.[3] This model would be more responsive to the unique challenges faced by countries in the Global South, including the need to balance development goals with the protection of human rights, ensuring they are not overpowered by the tech giants, which are primarily U.S.-based. This new model aligns with the idea of the Fifth Estate, as it seeks to empower local communities and their digital influence.

A particularly compelling aspect of the Institute was the examination of Meta’s Oversight Board and its role in protecting human rights amid global tech acceleration.[4] The Oversight Board represents a novel approach to content moderation, offering a degree of independence and transparency that is rare among tech companies. However, the discussions also highlighted the challenges the Board faces, including its limited jurisdiction and the broader question of how to ensure that human rights are upheld in an era of rapid technological change. There is also the open question of independence: if the Board is funded by Meta, how can it be truly independent?

The need for stronger international frameworks and greater cooperation among stakeholders was a recurring theme, underscoring the importance of global collaboration in addressing these challenges. The Fifth Estate plays a critical role here as well, as the collective influence of networked individuals and organizations can push for greater accountability and human rights protections in the digital age.

Fig. 3. One of many group discussions, 2024.

The issue of foreign information manipulation, particularly disinformation campaigns designed to interfere with elections, was another critical topic. The example of Russia’s interference in U.S. and Ukrainian elections served as a stark reminder of the power of disinformation in destabilizing democracies.[5] The discussions at the Institute underscored the need for robust strategies to counter such threats, including better coordination between governments, tech companies, and civil society. Cybersecurity emerged as a key area of focus, particularly in ensuring the integrity of information in an age where AI is increasingly used to create and spread false narratives.

The role of the U.S. Federal Communications Commission (FCC) in shaping the future of AI and media policy was also a major point of discussion.[6] I gained a deeper understanding of the FCC’s mandate, particularly its focus on ensuring fair competition, protecting consumers, and promoting innovation. The FCC’s approach to AI reflects cautious optimism, recognizing the potential benefits of AI while also acknowledging the need for regulation to prevent abuses. The discussions highlighted the importance of balancing innovation with the need to protect the public from potential harms, particularly in areas such as privacy and data security.

Finally, the Institute emphasized the critical role of cybersecurity in maintaining information trust, especially against the backdrop of emerging AI technologies, which I detailed in my presentation (Fig 4). This included an overview of both the new NIST Cyber Security Framework (CSF) 2.0, which adds governance as a core function, and the NIST AI Risk Management Framework (RMF), including its lifecycle swim lanes and a description of their inputs and outputs. As AI becomes more sophisticated, the potential for malicious use grows, making cybersecurity a vital component of any strategy to protect information integrity. The discussions reinforced the idea that cybersecurity must be integrated into all aspects of tech policy, from content moderation to data protection, to ensure that AI is used responsibly.

Fig 4. Jeremy Swenson Presenting Eight Artificial Intelligence (AI) Cyber-Tech Observations, 2024.

In conclusion, my experience at the 2024 Oxford Media Policy Summer Institute was truly impactful. It underscored the significance of inclusivity, collaborative technological innovation, and the vital role of private sector competition in advancing progress. The recurring focus on the growth of the Global South’s tech economy emphasized the need for adaptable and locally tailored regulatory frameworks. As AI continues to develop, the urgency for comprehensive regulation and risk management frameworks is becoming increasingly evident. However, in many areas, it is still too early for definitive solutions, highlighting the necessity for ongoing research and learning.

There is a clear need for independent entities to provide checks and balances on big tech, with the Facebook Oversight Board serving as a promising start, though much more remains to be done. The strength and independence of journalism and free speech are undermined if they are weakened by misinformed platforms or overreaching governments. Network shutdowns and censorship should be rare, thoroughly justified, and subject to transparent auditing. The Institute has provided me with knowledge of the key stakeholders and their dependencies and levels of regulation. Importantly, I obtained key connections across the globe to engage meaningfully in these critical discussions, and I am eager to apply these insights in my future endeavors, be it a tech start-up, writing, or business advisory.

Last but not least, a big thanks to my esteemed fellow classmates this year. I could not have done it so well without all of you; thanks and much respect!

Ashwini Natesan for always correctly offering the Sri Lankan perspective. Martin Fertmann for shedding light on social media oversight. Erik Longo for offering insight on the DSA and related cyber risk. Davor Ljubenkov for the emerging tech and automation insight. Carolyn Khoo for insight on ‘The Korean Wave’. Purevsuren Boldkhuyag for the Asian legal and communication insight. Elena Perotti for the on-point public policy insight. Brandie Lustbader for winning a key legal issue and setting the example of justice and free speech in media. Jan Tancinco for the great insight on video and digital content strategy and innovation with the Prince reference! Thorin Bristow for your great article “Views on AI aren’t binary – they’re plural”. Eirliani Abdul Rahman for your insight on social media and digital AI from many orgs. Hafidz Hakimi, Ph.D., for the Malaysian legal perspective. Vinti Agarwal for the Indian legal view of e-sports/gaming. Numa Dhamani for your insight on AI, tech, and book writing. Bastian Scibbe for your insight on data protection and digital rights. John Okande for the Kenyan perspective on tech governance and policy. Ivana Bjelic Vucinic for the insight on the Global Forum for Media Development (GFMD). Ibrahim Sabra for insight on digital expression and social justice. Mesfin Fikre Woldmariam for the Ethiopian perspective on tech governance and free speech. Katie Mellinger for the FCC knowledge. Margareth Kang for the Brazilian tech public policy insight. Luise Eder for helping organize and lead all of this among a bunch of crafty intellectuals. Nicole Stremlau for leading such a diverse and important agenda at a time when it is so relevant. Thanks to everyone else as well.

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Tech Policy from Oxford University. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

References:


[1] University of Oxford. “Oxford Media Policy Summer Institute”. 2024. https://pcmlp.socleg.ox.ac.uk/oxford-media-policy-summer-institute-2024/

[2] Dutton, William. “The fifth estate: the power shift of the digital age.” Oxford University Press. 2023. https://www.tandfonline.com/doi/full/10.1080/1369118X.2024.2343811

[3] Flew, T., & Lin, F. “The third way of global Internet governance: A dialogue with Terry Flew.” Communication and the Public, 7(3). 2022. https://journals.sagepub.com/doi/full/10.1177/20570473221123150

[4] Meta. “The Oversight Board”. 2024. https://www.oversightboard.com/

[5] Tucker, Eric. “US disrupts Russian government-backed disinformation campaign that relied on AI technology”. AP. 2024. https://apnews.com/article/russia-disinformation-fbi-justice-department-50910729878377c0bf64a916983dbe44

[6] FCC. “The Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers.” 2023. https://www.fcc.gov/fcc-nsf-ai-workshop

Seven Cyber-Tech Observations of 2022 and What They Mean for 2023.

Minneapolis 01/17/23

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #ftxfraud #googlemandiant #infosec #musktwitter #twitterfiles #disinformation #cio #ciso #cto

By Jeremy Swenson

Summary:

Fig. 1. 2022 Cyber Year in Review Mashup; Stock, 2023.

The pandemic continues to be a big part of the catalyst for digital transformation in tech automation, identity and access management (IAM), big data, collaboration tools, artificial intelligence (AI), and increasingly the supply chain. Disinformation efforts morphed and grew last year with stronger crypto tie-ins challenging data and culture; Twitter-hyped pump-and-dumps, for example. Additionally, cryptocurrency-based money laundering, fraud, and Ponzi schemes increased, partly due to weaknesses in the fintech ecosystem around compliance, coin splitting/mixing fog, and IAM complexity. This requires better blacklisting by crypto exchanges and banks to stop illicit transactions, erring on the side of compliance, and it requires us to pay more attention to knowing and monitoring our own social media baselines.

The Costa Rican Government was forced to declare a national emergency on 05/08/22 because the Conti Ransomware intrusion had extended to most of its governmental entities. This was a more advanced and persistent ransomware with Russian gang ties (Associated Press; NBC News, 06/17/22). This highlights the need for smaller countries to better partner with private infrastructure providers and to test for worst-case scenarios.

We no longer have the same office due to mass work from home (WFH) and the mass resignation/gig economy. This implies increased automated zero-trust policies and tools for IAM, with less physical badge access required. The security perimeter is now defined more by data analytics than by physical or digital boundaries. Education and awareness around the review and removal of non-essential mobile apps grow as a top priority as mobile apps multiply. All the while, data breaches and ransomware reach an all-time high while costing more to mitigate. Lastly, all these things make the Google acquisition of Mandiant more relevant, plausibly creating one of the most powerful security analytics and digital investigation entities in the world, rivaling nation-state intelligence agencies.

Intro:

Every year I like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since crypto money laundering via splitting/mixing, disinformation, the pandemic, and the mass resignation/gig economy continue to be a large part of the catalyst for most of these trends. All these trends are likely to significantly impact small businesses, government, education, high-tech, and large enterprises in big and small ways.

1) The Main Purpose of Cryptocurrency Mixer and/or Splitter Services is Fraud and Money Laundering.

Cryptocurrency mixer and/or splitter services serve no valid “real-world” ethical business use case, considering the fintech and legal options available. There are rare exceptions: a refugee fleeing a financially abusive government regime, or someone whose assets a terrorist organization is trying to seize while the national currency fails, as in Venezuela, which I wrote about in my 2014 article, “Thought$ On The Future of Digital Curren¢y For A Better World.” Cases like that are about political revolution and personal safety more than anything else. Although they give a valid reason why someone might want to mix and/or split crypto assets, they are not the use case behind the recent uptick in ill-intended mixer and/or splitter service use. Therefore, it is only fair that we discuss the most likely and common use case, which is trending up, and not the few rare edge cases. That use case is fraud, Ponzi schemes, and money laundering.

The evidence does not support that a regular crypto exchange is the same thing as a mixer and/or splitter service. For definition’s sake, mixing and/or splitting cryptocurrency is not the same as selling, buying, or converting it – all of which can be done on one or more of the crypto exchanges, which is why they are called exchanges. If the two were the same, or even considerably similar, why would people and organizations use mixer and/or splitter services at all? They use them because they offer a considerably different service. Using a mixer and/or splitter service assumes you have already obtained crypto from a separate exchange – a step or more earlier in the daisy chain, whether via legal or illegal means. Moreover, why do people pay repeated and hugely excessive fees for these services? The fees are out of line with anything comparable because the operators carry higher compliance and legal risk: they can face sanctions, as Blender.io did, or other enforcement and regulatory action, as FTX, Coinbase, Gemini, and others have.

You can still have privacy if that is what you are seeking via a semblance of legal moves such as a trust tied to a separate legal entity, family office entity, converting to real estate, and marriage entity – if you have time to do the paperwork. Legally savvy people have anonymity over their assets often to avoid fraudsters, sales reps, and just privacy for privacy’s sake – but again still not the same use case. Even when people/orgs use these legal instruments for privacy, they still have compliance reporting and tax obligations – some disclosure. Keep in mind some disclosure serves to protect you, that you in fact own the assets you say you own. Using these legal instruments with the right technical security including an encrypted VPN and multifactor authentication serves to sustain privacy, and you will then not need a crypto mixer and/or splitter.

Yet if you had cryptocurrency and wanted strong privacy to protect your assets, why would you not at least use some of the aforementioned legal instruments or the like? Mostly because any attorney worth anything would be obligated to report this blatant suspected fraud, and would not want to tarnish their name on the filings, etc. Specifically, the attorney would have to see and know where and what entities the crypto was coming from and going to, under what contexts, and that could trigger them to report or refuse to work with them – a fraudster would want to avoid getting detected.

Specifically, the use of multiple legal entities in different countries in a daisy chain of crypto coin mixing and/or splitting tends to be the pattern for persistent fraud and money laundering. That was the case in the $4.5 billion crypto theft out of New York (the “Crocodile of Wall Street” case), the Blender.io mixing fraud, and many other cases.

A May 2022 U.S. Treasury press release concerning mixer-service money laundering described it this way (Dept of Treasury; Press Release, 05/06/22):

“Blender.io (Blender) is a virtual currency mixer that operates on the Bitcoin blockchain and indiscriminately facilitates illicit transactions by obfuscating their origin, destination, and counterparties. Blender receives a variety of transactions and mixes them together before transmitting them to their ultimate destinations. While the purported purpose is to increase privacy, mixers like Blender are commonly used by illicit actors. Blender has helped transfer more than $500 million worth of Bitcoin since its creation in 2017. Blender was used in the laundering process for DPRK’s Axie Infinity heist, processing over $20.5 million in illicit proceeds.”

Fig 2. U.S. Treasury Dept; Blender.io Crypto Mixer Fraud, 2022.
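The pooling-and-payout mechanism Treasury describes can be illustrated with a toy model. This is a deliberately simplified sketch for intuition only – the addresses, fee rate, and chunk sizes are hypothetical, and real mixers operate on-chain with far more obfuscation:

```python
import random

def mix(deposits, fee_rate=0.03, rng=None):
    """Toy mixer: pool all deposits, keep a fee, then pay out the pool
    in randomized chunks to fresh addresses. An outside observer sees
    no direct link between any one deposit and any one payout.
    deposits: {source_address: amount}; returns [(fresh_address, amount)].
    """
    rng = rng or random.Random(0)  # seeded only for repeatability in this sketch
    pool = round(sum(deposits.values()) * (1 - fee_rate), 2)  # mixer keeps the fee
    payouts, i = [], 0
    while pool > 0:
        # Randomized chunk sizes defeat simple amount-matching heuristics.
        chunk = min(pool, round(rng.uniform(0.1, 1.0), 2))
        payouts.append((f"fresh_addr_{i}", chunk))
        pool = round(pool - chunk, 2)
        i += 1
    return payouts

deposits = {"alice_addr": 1.0, "bob_addr": 2.0, "mallory_addr": 5.0}
payouts = mix(deposits)
# Total paid out equals deposits minus the fee, but no single payout
# can be traced back to alice, bob, or mallory specifically.
print(round(sum(amt for _, amt in payouts), 2))  # 7.76 (8.0 minus the 3% fee)
```

The sketch shows the property regulators object to: once Mallory’s “dirty” 5.0 is pooled with Alice’s and Bob’s clean deposits, every payout is equally tainted, which is exactly the indiscriminate obfuscation the Treasury release describes.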

The question we as a society should be thinking about is tech ethics. At what point does a design feature so clearly enable fraud that it should not be pursued? For example, Silk Road crossed the line by selling illegal drugs and enabling extortion and other crime. Hacker networks cross the line when they breach companies, steal their credit card data, and put it up for sale on the dark web. Facebook crossed the line when it enabled bias and undue favor to impact policy outcomes.

Crypto mixer and/or splitter services (not mere crypto exchanges) are about as close to “money laundering as a service” as it gets – relative to anything else technically available, excluding the dark web, where there are far worse things. Obviously, the developers, product owners, and project managers behind crypto mixer and/or splitter services like this are serving the fraud and money laundering use case more than anything else. Organized crime rings are very likely giving them money and direction to this end.

If you are for and use mixer and/or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face extortionately high fees, zero customer service, no regulatory protection, and no decent Terms of Service and/or Privacy Policy, if any; and you have no guarantee that the service will even work the way you think it will.

In fact, you have so much decentralized “so-called” privacy that it could work against you. For example, imagine you pay the high fees to mix and split your crypto multiple times, and then your crypto is stolen by one of the mixing and/or splitting services. This is plausible because the operators know many of their customers are committing fraud and money laundering – and even the customers who are not are still associated with platforms that enable it. If the platform operators steal crypto in this process, the victims have little incentive to speak up. Moreover, the mixing and/or splitting services have a convenient cover for theft: privacy. They will not admit that they stole it but will say something like “everything is private, so we can’t see or know, but you are responsible for what private assets you have or don’t have.” They will claim “stealing it is impossible,” which of course is a complete lie.

In sum, what reason do you have to trust a crypto mixing and/or splitting service with your digital assets? As outlined above, they are hardly incentivized to protect them or you, and they operate in the shadows of antiquated non-Western fintech regulation. So what do you really get besides likely fraud? No solid argument or evidence supports privacy alone as the business rationale, and the net effect is to enable money laundering and fraud.

Now there are valid use cases for crypto and blockchain technology generally and here are five of them:

1.      Innovative tech removing the central bank for peer-to-peer exchange that is faster and more global, especially helping the underbanked countries.

2.      Smart contracts can be built on blockchain.

3.      Blockchain can be used for crowdfunding.

4.      Blockchain can be used for decentralized storage.

5.      The traditional cash and coin supply chain is burdensomely wasteful, costly, dirty, and counterfeiting is a real issue. Why do you need to carry ten dollars in quarters or a wad of twenty-dollar bills or even have that be a nation’s economic backing in today’s tech world?

Here are six tips to identify crypto-related scams:

1.      With most businesses, it should be easy to find out who the key operators are. If you can’t find out who is running a cryptocurrency or exchange via LinkedIn, Medium, Twitter, a website, or the like, be very cautious.

2.      Whether in cash or cryptocurrency, any business opportunity promising free money is likely to be fake. If it sounds too good to be true it likely is. Multi-level marketing is one old example of this scam.

3.      Never mix online dating and investment/financial advice. If you meet someone on a dating site or social media app and they want to show you how to invest in crypto, or they ask you to send them crypto, it is a scam – no matter what sob story or huge return they claim (FTC).

4.      Watch out for scammers who pretend to be celebrities who can multiply any cryptocurrency you send them. If you click on an unexpected link they send or send cryptocurrency to a so-called celebrity’s QR code, that money will go straight to a scammer, and it’ll be gone. Celebrities don’t have time to contact random people on social media, but they are easily impersonated (FTC).

5.      Celebrities are however used to pump crypto prices via social media, so they get a windfall, and everyone else takes a hit. Watch out for crypto like Dogecoin which is heavily tied to celebrity pumps with no real-world business value. If you are lucky enough to get ahead, get out then.

6.      Watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers or advisors want to share that information and will back it up with details in many documents and filings (FTC). 

2) Disinformation Efforts Are Further Exposed:

Disinformation did not slow down in 2022, due to sustained advancements in communications technologies, the growth of large social media networks, and the “appification” of everything, all of which increase the ease and capability of disinformation. Disinformation is defined as incorrect information intended to mislead or disrupt, especially propaganda issued by a government organization to a rival power or the media. For example, governments create digital hate mobs to smear key activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion (Shelly Banjo; Bloomberg, 05/18/2019).

Today’s disinformation war is largely digital via platforms like Facebook, Twitter, Instagram, Reddit, WhatsApp, Yelp, TikTok, SMS text messages, and many other lesser-known apps. Yet even state-sponsored and private news organizations are increasingly the weapon of choice, creating a false sense of validity. Undeniably, the battlefield is wherever many followers reside.

Bots and botnets are often behind the spread of disinformation, complicating efforts to trace and stop it. Further complicating this phenomenon is the number of app-to-app permissions. For example, the CNN and Twitter apps may have permission to post to Facebook, Facebook may have permission to post to WordPress, and WordPress may post to Reddit – or any combination like this. Not only does this make it hard to identify the chain of custody and original source, it also weakens privacy and security due to the many authentication permissions involved. The copied data is duplicated at each of these layers, which is an additional consideration.
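A minimal sketch can make the chain-of-custody problem concrete. The platform names mirror the example above, but the data structure and records are hypothetical – real platforms expose nothing this clean:

```python
# Each cross-posted copy records only the post it was reposted from,
# not the original source, so tracing provenance requires every
# intermediate record to still exist.
posts = {"p1": {"platform": "Twitter", "reposted_from": None}}  # the original post

def repost(post_id, from_id, platform):
    posts[post_id] = {"platform": platform, "reposted_from": from_id}

repost("p2", "p1", "Facebook")
repost("p3", "p2", "WordPress")
repost("p4", "p3", "Reddit")

def trace_origin(post_id):
    """Walk the repost chain backwards; the chain of custody breaks
    if any intermediate platform has deleted its copy."""
    hops = []
    while post_id is not None:
        record = posts.get(post_id)
        if record is None:
            return hops, None  # broken chain: origin unrecoverable
        hops.append(record["platform"])
        post_id = record["reposted_from"]
    return hops, hops[-1]

print(trace_origin("p4"))  # (['Reddit', 'WordPress', 'Facebook', 'Twitter'], 'Twitter')

del posts["p2"]  # one platform in the middle removes its copy...
print(trace_origin("p4"))  # (['Reddit', 'WordPress'], None) – the source is lost
```

One missing hop is enough to sever the trail, which is why tracing a narrative back to its source requires cooperation from every platform in the chain, not just the last one.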

We all know that false news usually spreads faster than real news, largely because it is sensationalized. Most disinformation draws in viewers, which drives clicks and ad revenue; it is a money-making machine. Whoever significantly controls what’s trending in the news and/or social media influences how many people will believe it, which in turn affects how many people will act on that belief, for good or bad. This is exacerbated when combined with human bias or irrational emotion.

In 2022 there were many fake crypto initial coin offerings (ICOs) and related scams, including the Titanium Blockchain scheme, in which investors lost at least $21 million (Dept. of Justice; Press Release, 07/25/22). The Celsius crypto lending platform also came tumbling down, largely because it was a social media-hyped Ponzi scheme (CNBC; Arjun Kharpal, 07/08/22). This negatively impacts culture by setting a misguided example of what is acceptable.

Elon Musk’s controversial purchase of Twitter for $44 billion in October 2022 resulted in a big management shakeup and a strategy change (New York Times; Kate Conger and Lauren Hirsch, 10/27/22). The stated goal was to reduce bias and misinformation in the name of free and fair speech. To this end, the new Twitter under Musk’s direction produced “The Twitter Files,” a set of internal Twitter, Inc. documents made public beginning in December 2022, with the help of independent journalists Matt Taibbi, Bari Weiss, and Lee Fang, and authors Michael Shellenberger, David Zweig, and Alex Berenson.

The sixth release of the Twitter Files was on 12/12/22 and revealed (Real Clear Politics; Kalev Leetaru, 12/20/22):

“Twitter granted great deference to government agencies and select outside organizations. While any Twitter user can report a tweet for removal, officials at the platform provided more direct and expedited channels for select organizations, raising obvious ethical questions about the government’s non-public efforts at censorship. It also captured the degree to which law enforcement requested information – from the physical location of users to foreign influence – from social platforms outside of formal court orders, raising important questions of due process and accountability.”

Fig. 3. Elon Musk Twitter Freedom of Speech Mash Up; Stock / Getty, 2022.

Aided by Twitter’s misinformation, huge swaths of confused voters and activists aligned more with speculation, emotion, and hype than with unbiased facts, or projected themselves as fake commentators. This dirtied the data around the election process and raises the question: which parts of the election information process are broken? It normalizes petty policy fights, emotional reasoning, and a lack of unbiased intellectualism, negatively impacting Western culture, all to the threat actors’ delight. Increased public-to-private partnerships, more educational rigor, and enhanced privacy protections for election and voter data are needed to combat this disinformation.

3) Identity and Access Management (IAM) Scrutiny Drives Zero Trust Orchestration:

The pandemic and the mass resignation/gig economy have pushed most organizations into a mass work-from-home (WFH) posture. Generally, this improves productivity, making it likely to become the new norm, albeit with new rules and controls. To support this, 51% of business leaders accelerated the deployment of zero trust capabilities in 2020 (Andrew Conway; Microsoft, 08/19/20), and there is no evidence to suggest this slowed in 2022; rather, it likely increased to support zero trust orchestration.

Orchestration is enhanced automation between partner zero trust applications and data that leaves next to no blind spots. This reduces risk and increases visibility and infrastructure control in an agile way. The quantified benefit of deploying mature zero trust capabilities, including orchestration, is on average $1.51 million less in breach response costs compared to an organization that has not rolled out zero trust capabilities (IBM Security; Cost of a Data Breach Report, 2022).

Fig. 4. Zero Trust Components to Orchestration; Microsoft, 09/17/21

Zero trust moves organizations to a need-to-know-only access mindset with inherent deny rules, all while assuming you are already compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access control (RBAC), firewalled networks, improved need-to-know policies, effective allowlisting and blocklisting of apps, group membership reviews, and state-of-the-art privileged access management (PAM) tools for the next year. In the future, more of this is likely to be automated and orchestrated (Fig. 4) so that one part does not hinder another through complexity fog.

4) Security Perimeter is Now More Defined by Data Analytics than Physical/Digital Boundaries:

This increased WFH posture blurs the security perimeter both physically and digitally. New IP addresses, internet volume, routing, geolocation, and virtual machines (VMs) exacerbate the blur, raising the criticality of good data analytics and dashboarding to define the digital boundaries in real time. Prior audits, security controls, and policies may therefore be ineffective. For instance, empty corporate offices are the physical byproduct of mass WFH, requiring organizations to disable badge access by default. Extra security in or near server rooms is also required. The pandemic has also made vendor interactions more digital, so digital vendor connection points should be reduced and monitored in real time, and the related exception policies should be re-evaluated.

New data lakes and machine-learning-informed patterns can better define security perimeter baselines. One example is knowing what percentage of your remote workforce is on which internet providers and of what type; for example, Google Fiber, Comcast cable, CenturyLink DSL, AT&T 5G, etc. Only certain modems work with each of these networks, and that leaves a data trail, though it could be paired with any type of router. What type of device do they connect with (Mac, Windows, VM, or other), and is it healthy? All of this can be determined via security perimeter analytics.
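A baseline like the one described above reduces to simple aggregation over remote-session telemetry. A minimal sketch, with made-up session records (the field names, ISPs, and device types are illustrative assumptions):

```python
from collections import Counter

# Illustrative remote-session telemetry; real records would come from
# VPN/IdP logs and carry far more fields (IP, geo, modem fingerprint).
sessions = [
    {"user": "a", "isp": "Comcast cable", "device": "Mac"},
    {"user": "b", "isp": "Comcast cable", "device": "Windows"},
    {"user": "c", "isp": "CenturyLink DSL", "device": "Mac"},
    {"user": "d", "isp": "AT&T 5G", "device": "VM"},
]

def baseline(records, field):
    """Percent of the remote workforce per value of `field`."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {k: round(100 * v / total, 1) for k, v in counts.items()}
```

Once such percentages are dashboarded over time, a sudden shift (say, a login from an ISP or device type that has never appeared in the baseline) becomes the analytics-defined perimeter alert the section describes.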

5) Cyber Firm Mandiant Was Purchased by Google, Spawning Private Sector Security Innovation:

Google completed its acquisition of security and incident response firm Mandiant for $5.4 billion in September 2022 (Google Cloud; Thomas Kurian, CEO – Google Cloud, 09/12/22). The acquisition gives the search and advertising leader better cloud security infrastructure, better market appeal, and more diversification. With a more advanced and integrated security foundation, Google Cloud can compete better against market leader Amazon Web Services (AWS) and runner-up Microsoft Azure, and on more than price, because features will likely grow to leverage Google’s differentiating machine learning and analytical abilities across clients throughout the industry.

Other benefits of integrating Mandiant include improved automated breach response logic, because security teams can now gather the required data and share it across Google customers to help analyze ransomware threat variants. Many of Google’s security-related products will also be enhanced by Mandiant’s threat intelligence and incident response capabilities. These include Google’s security orchestration, automation and response (SOAR) tool, which is described this way: “Part of Chronicle Security Operations, Chronicle SOAR enables modern, fast and effective response to cyber threats by combining playbook automation, case management and integrated threat intelligence in one cloud-native, intuitive experience” (Google; Google Cloud, 01/16/23).
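The SOAR ingredients named in that description (playbook automation, case management, integrated threat intelligence) can be sketched as a generic pipeline. To be clear, this is not the Chronicle SOAR API; every function, field, and value below is an illustrative assumption:

```python
# Generic SOAR-style playbook sketch (NOT the Chronicle SOAR API).
# Each step enriches a case dict; real steps would call out to intel
# feeds, ticketing systems, and EDR tooling.

def enrich_with_intel(case):
    case["intel"] = {"verdict": "known-ransomware"}  # stubbed intel lookup
    return case

def open_case(case):
    case["case_id"] = "CASE-0001"                    # stubbed case management
    return case

def contain_host(case):
    # Automated response: contain only on a confident intel verdict.
    case["contained"] = case["intel"]["verdict"] == "known-ransomware"
    return case

PLAYBOOK = [enrich_with_intel, open_case, contain_host]

def run_playbook(alert):
    """Run every playbook step in order over a copy of the alert."""
    case = dict(alert)
    for step in PLAYBOOK:
        case = step(case)
    return case
```

The value of the pattern is that each step is small and testable, and the ordered list is the playbook: swapping in Mandiant-derived intelligence only changes the enrichment step, not the pipeline.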

According to Dave Cundiff, CISO at Cyvatar, “if Google, as one of the leaders in data science, can progress and move forward the ability to prevent the unknown vectors of attack before they happen based upon the mountains of data available from previous breaches investigated by Mandiant, there could truly be a significant advancement in cybersecurity for its cloud customers” (SC Media; Steve Zurier, 04/15/22). This reflects a strong focus on prevention vs. response, which is greatly needed. Lastly, since AWS and Microsoft are unlikely to engage Mandiant directly now that Google owns it, they will likely look to acquire another security services player soon.

6) Data Breaches Have Increased in Number and Cost but Are Generally Identified Faster:

The pandemic continues to be part of the catalyst for increased lawlessness, including fraud, ransomware, data theft, and other types of profitable hacking. Cybercriminals are more aggressively taking advantage of geopolitical conflict and gaps in legal standing. For example, almost all hacking operations are based in countries that do not have friendly geopolitical relations with the United States or its allies, and their many proxy hops stay consistent with this. These proxy hops are how they hide their true location and identity.

Moreover, with local police departments extremely overworked and understaffed, and their number one priority being the huge uptick in violent crime in most major cities, white-collar cybercrimes remain a low priority. Additionally, local police departments have few cyber response capabilities, depending on the size of their precinct. Often, they must sheepishly defer to the FBI, CISA, and the Secret Service, or their delegates, for help. Yet unsurprisingly, there is a backlog for that as well, with preference going to large companies of national concern that fall clearly into one of the 16 critical infrastructure sectors, assuming turf fights and bureaucratic roadblocks don’t make things worse. Thus, many mid- and small-sized businesses are left in the cold to fend for themselves, which often results in them paying the ransom and then being victimized a second time when their insurance carrier denies their claims, raises their rates, and/or drops them.

Further complicating this is a lack of clarity on data breach and business interruption insurance coverage and terms. Keep in mind that most general business liability insurance policies and terms were drafted before hacking existed, so by default they lag the technology. Most often, general liability business insurance covers bodily injuries and property damage resulting from your products, services, or operations. Please see my related article, “10 Things IT Executives Must Know About Cyber Insurance,” to understand incident response and to reduce the risk of inadequate coverage and/or claims denials.

Data breaches are more expensive than ever. IBM’s 2022 annual Cost of a Data Breach Report put the average cost of a data breach at an estimated $4.35 million per organization. This is a $110,000 (2.6%) year-over-year increase and the highest in the report’s history (Fig. 5). However, the average times to identify and to contain a data breach each decreased by 5 days (Fig. 6), a total decrease of 10 days, or 3.5%. Note this is for general data breaches, not ransomware attacks.

Fig. 5. Cost of a Data Breach Increases 2021 to 2022 (IBM Security, 2022).
Fig. 6. Average Time to Identify and Contain a Data Breach Decreases 2021 to 2022 (IBM Security, 2022).
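A quick arithmetic check of the cited IBM figures. The 2021 baselines used below ($4.24 million average cost; 212 days to identify plus 75 days to contain) are taken from the prior year's report and should be treated as assumptions here:

```python
# Sanity-check the year-over-year deltas cited from the IBM
# Cost of a Data Breach reports (2021 baselines assumed).
cost_2022, cost_2021 = 4.35e6, 4.24e6
yoy_increase = cost_2022 - cost_2021                # $110,000
yoy_pct = round(100 * yoy_increase / cost_2021, 1)  # ~2.6%

days_2021 = 212 + 75   # 2021: days to identify + days to contain
days_2022 = 207 + 70   # 2022: each phase down 5 days
days_saved = days_2021 - days_2022                  # 10 days
days_pct = round(100 * days_saved / days_2021, 1)   # ~3.5%
```

Both percentages in the text check out against these baselines: $110,000 on $4.24 million is 2.6%, and 10 days on 287 is 3.5%.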

Lastly, this is a lot of money for an organization to spend on a breach, and the amount can be higher still when you factor in longer-term consequence costs such as the increased risk of a second breach, brand damage, and/or delayed regulatory penalties lurking below the surface, all of which differ by industry. In sum, it is cheaper and more risk-prudent to spend even $4.35 million, or a comparable percentage at your organization, on preventative zero trust capabilities than to deal with the fallout of a data breach.

7) The Costa Rican Government Was Heavily Hacked and Encrypted by the Conti Ransomware:

The Costa Rican government was forced to declare a national emergency on 05/08/22 because the Conti ransomware intrusion had extended to most of its governmental entities. Conti is an advanced and persistent ransomware-as-a-service attack platform, and the attackers are believed to be the Russian cybercrime gang Wizard Spider (Associated Press; NBC News, 06/17/22). “The threat actor entry point was a system belonging to Costa Rica’s Ministry of Finance, to which a member of the group referred to as ‘MemberX’ gained access over a VPN connection using compromised credentials” (Bleeping Computer; Ionut Ilascu, 07/21/22). Phishing is a common way to harvest such credentials, but in this case the attackers went further: “Using the Mimikatz post-exploitation tool for exfiltrating credentials, the adversary collected the logon passwords and NTDS hashes for the local users, thus getting plaintext and bruteable local admin, domain and enterprise administrator hashes” (Bleeping Computer; Ionut Ilascu, 07/21/22).

Fig. 7. Costa Rica Conti Ransomware Attack Architecture; AdvIntel via (Bleeping Computer; Ionut Ilascu, 07/21/22).

The attack resulted in 672 GB of data being leaked and dumped, or 97% of what was stolen (Bleeping Computer; Ionut Ilascu, 07/21/22). Some believe Costa Rica was targeted because it supported Ukraine against Russia. This highlights the need for smaller countries to partner better with private infrastructure providers and to test for worst-case scenarios.
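One defensive takeaway from the credential-theft stage above is that the tooling leaves recognizable traces. A minimal log-screening sketch for the well-known indicators named in the incident (Mimikatz, NTDS extraction); the patterns and log format are illustrative, not a production detection rule:

```python
import re

# Well-known credential-theft indicator strings from the attack chain
# described above. Real detections would use EDR telemetry, not string
# matching, so treat this purely as an illustrative sketch.
INDICATORS = [
    r"mimikatz",
    r"sekurlsa::logonpasswords",  # classic Mimikatz credential-dump module
    r"ntds\.dit",                 # Active Directory credential store file
]
INDICATOR_RE = re.compile("|".join(INDICATORS), re.IGNORECASE)

def flag_lines(log_lines):
    """Return the log lines containing any known indicator."""
    return [ln for ln in log_lines if INDICATOR_RE.search(ln)]
```

Paired with alerting on anomalous VPN logins (the initial access vector in this case), even coarse screening like this gives smaller governments and businesses a starting point for the worst-case testing the section recommends.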

Take-Aways:

The pandemic remains a catalyst for digital transformation in tech automation, IAM, big data, collaboration tools, and AI. We no longer occupy the same offices, so less badge access is needed. The growth and acceptance of mass WFH, combined with the mass resignation/gig economy, remind employers that great pay and culture alone are not enough to keep top talent; signing bonuses and personalized treatment are likely needed. Single sign-on (SSO) will expand to personal devices and smartphones/watches. Geolocation-based authentication is here to stay, with double biometrics likely. The security perimeter is now more defined by data analytics than by physical/digital boundaries, and we should dashboard it with machine learning and AI tools.

Education and awareness around the review and removal of non-essential mobile apps is a top priority, especially for mobile devices used separately or jointly for work purposes. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. A bring-your-own-device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web/IT risk. Mapping the mobile ecosystem components in detail is a must.

IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about it, because if we are, our organizations will stay weak and insecure, and we will be swayed by the same political bias we fear confronting. As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. More social media training is needed, as many security professionals still think it is mostly an external marketing concern. Public-to-private partnerships need to improve, and app-to-app permissions need to be scrutinized. Enhanced privacy protections for election and voter data are needed. Not everyone needs to be a journalist, but everyone can have the common sense to identify malware-inspired fake news. We must report undue bias in big tech from an IT, compliance, media, and security perspective.

Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity/fog. Organizations should preconfigure cloud-scale options and spend more on cloud-trained staff. They should also select more than one cloud provider, ideally two or three, each independent of the others. This helps staff get cross-trained on different cloud platforms and add-ons, mitigates risk, and makes vendors bid more competitively.

In regard to cryptocurrency, NFTs, ICOs, and related exchanges – watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers or advisors want to share that information and will back it up with details in many documents and filings (FTC).

Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erring on the side of compliance, and we must pay more attention to knowing and monitoring our own social media baselines. If you use crypto mixer and/or splitter services, you run the risk of having your digital assets mixed with dirty digital assets; you face extortionate fees, zero customer service, no regulatory protection, and no decent Terms of Service or Privacy Policy, if any; and you have no guarantee the service will even work the way you think it will.

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at banks, insurance companies, retailers, healthcare organizations, and governments, including membership on the Federal Reserve Secure Payment Task Force. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while improving processes. He is a frequent speaker, published writer, and podcaster, and does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google merging Google+ video chat with Google Hangouts have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) from the University of Minnesota, and a BA in political science from the University of Wisconsin-Eau Claire.