Navigating the Future of Media, Law, and AI: Reflections on the 2024 Oxford Media Policy Summer Institute

Fig. 1. Jeremy Swenson at the Oxford Media Policy Summer Institute, 2024.

#medialaw #oxford #mediaethics #airegulation #aipolicy #techethics #oversightboard #techrisk #web3 #blockchain #techcensorship #contentmoderation Oxford Media Policy Summer Institute Centre for Socio-Legal Studies, University of Oxford Faculty of Law, University of Oxford

Minneapolis

The Oxford Media Policy Summer Institute[1], held in person in Oxford, UK, every year for more than twenty-five years, is a prestigious program that unites leading communications scholars, media lawyers, regulators, human rights activists, technologists, and policymakers from around the globe. As an integral part of Oxford’s Centre for Socio-Legal Studies and the Faculty of Law, specifically through the Program in Comparative Media Law and Policy (PCMLP), the Institute fosters a global and multidisciplinary understanding of the complex relationships between technology, media, and policy. It aims to broaden the pool of talented scholars and practitioners, connect them to elite professionals, facilitate interdisciplinary dialogue, and build a space for future collaborations. With over 40 participants from more than 20 countries, the Institute provides an unparalleled opportunity to engage with diverse experiences and media environments. Its alumni network, comprising leaders in government, corporations, non-profits, and academia, remains vibrant and collaborative long after the program concludes.

Reflecting on my completion of the 2024 Oxford Media Policy Summer Institute, I am struck by the depth of knowledge I gained, particularly in the areas of media, tech and diversity, and AI policy. One of the most enlightening discussions revolved around the EU’s approach to regulating platforms like Facebook, Twitter, and Google. The EU has been at the forefront of creating frameworks that balance the need for free expression with the imperative to curb harmful content. I learned about the evolving regulatory landscape, including the Digital Services Act (DSA), which addresses content moderation, online targeted advertising, and the configuration of online interfaces and recommender systems, and the UK’s Online Safety Act (formerly the Online Safety Bill), which seeks to hold tech giants accountable for the content on their platforms. These discussions highlighted the increasing importance of the “Fifth Estate,” a concept coined by William H. Dutton, referring to the networked individuals who, through the Internet, are empowering themselves in ways that challenge the control of information by traditional institutions.[2] The EU’s policies aim to regulate this new power dynamic while protecting vulnerable users and ensuring transparency and accountability.

Fig. 2. The 2024 cohort of the Oxford Media Policy Summer Institute.

The Institute also provided invaluable insights into AI types, elections, and content moderation in the Global South. The discussions on the Global South’s technological maturity and policy governance revealed significant gaps in infrastructure, regulation, and policy. These challenges are evident in cases of internet censorship and shutdowns during political unrest, as well as instances of election manipulation. However, I also learned about innovative approaches being developed across the Global South, which could serve as models for other regions. One such approach is a proposed third-wave model of tech governance that emphasizes local context, community involvement, and adaptive regulation.[3] This model would be more responsive to the unique challenges faced by countries in the Global South, including the need to balance development goals with the protection of human rights, ensuring they are not overpowered by the tech giants, which are primarily U.S.-based. This new model aligns with the idea of the Fifth Estate, as it seeks to empower local communities and their digital influence.

A particularly compelling aspect of the Institute was the examination of Meta’s Oversight Board and its role in protecting human rights amid global tech acceleration.[4] The Oversight Board represents a novel approach to content moderation, offering a degree of independence and transparency that is rare among tech companies. However, the discussions also highlighted the challenges the Board faces, including its limited jurisdiction and the broader question of how to ensure that human rights are upheld in an era of rapid technological change. There is also the question of independence itself: if the Board is funded by Meta, can it ever be truly independent?

The need for stronger international frameworks and greater cooperation among stakeholders was a recurring theme, underscoring the importance of global collaboration in addressing these challenges. The Fifth Estate plays a critical role here as well, as the collective influence of networked individuals and organizations can push for greater accountability and human rights protections in the digital age.

Fig. 3. One of many group discussions, 2024.

The issue of foreign information manipulation, particularly disinformation campaigns designed to interfere with elections, was another critical topic. The example of Russia’s interference in U.S. and Ukrainian elections served as a stark reminder of the power of disinformation in destabilizing democracies.[5] The discussions at the Institute underscored the need for robust strategies to counter such threats, including better coordination between governments, tech companies, and civil society. Cybersecurity emerged as a key area of focus, particularly in ensuring the integrity of information in an age where AI is increasingly used to create and spread false narratives.

The role of the U.S. Federal Communications Commission (FCC) in shaping the future of AI and media policy was also a major point of discussion.[6] I gained a deeper understanding of the FCC’s mandate, particularly its focus on ensuring fair competition, protecting consumers, and promoting innovation. The FCC’s approach to AI reflects cautious optimism, recognizing the potential benefits of AI while also acknowledging the need for regulation to prevent abuses. The discussions highlighted the importance of balancing innovation with the need to protect the public from potential harms, particularly in areas such as privacy and data security.

Finally, the Institute emphasized the critical role of cybersecurity in maintaining information trust, especially against the backdrop of emerging AI technologies, which I detailed in my presentation (Fig. 4). This included an overview of both the new NIST Cybersecurity Framework (CSF) 2.0, which adds a governance function, and the NIST AI Risk Management Framework (RMF), including its lifecycle swim lanes and a description of their inputs and outputs. As AI becomes more sophisticated, the potential for malicious use grows, making cybersecurity a vital component of any strategy to protect information integrity. The discussions reinforced the idea that cybersecurity must be integrated into all aspects of tech policy, from content moderation to data protection, to ensure that AI is used responsibly.

Fig. 4. Jeremy Swenson presenting Eight Artificial Intelligence (AI) Cyber-Tech Observations, 2024.

In conclusion, my experience at the 2024 Oxford Media Policy Summer Institute was truly impactful. It underscored the significance of inclusivity, collaborative technological innovation, and the vital role of private sector competition in advancing progress. The recurring focus on the growth of the Global South’s tech economy emphasized the need for adaptable and locally tailored regulatory frameworks. As AI continues to develop, the urgency for comprehensive regulation and risk management frameworks is becoming increasingly evident. However, in many areas, it is still too early for definitive solutions, highlighting the necessity for ongoing research and learning.

There is a clear need for independent entities to provide checks and balances on big tech, with Meta’s Oversight Board serving as a promising start, though much more remains to be done. Journalism and free speech lose their strength and independence when platforms spread misinformation or governments overreach. Network shutdowns and censorship should be rare, thoroughly justified, and subject to transparent auditing. The Institute has provided me with knowledge of the key stakeholders, their dependencies, and the levels of regulation that apply to them. Importantly, I made key connections across the globe to engage meaningfully in these critical discussions, and I am eager to apply these insights in my future endeavors, be it a tech start-up, writing, or business advisory.

Last but not least, a big thanks to my esteemed fellow classmates this year. I could not have done it so well without all of you; thanks and much respect!

Ashwini Natesan for always correctly offering the Sri Lankan perspective. Martin Fertmann for shedding light on social media oversight. Erik Longo for offering insight on the DSA and related cyber risk. Davor Ljubenkov for the emerging tech and automation insight. Carolyn Khoo for insight on ‘The Korean Wave’. Purevsuren Boldkhuyag for the Asian legal and communication insight. Elena Perotti for the on-point public policy insight. Brandie Lustbader for winning a key legal issue and setting the example of justice and free speech in media. Jan Tancinco for the great insight on video and digital content strategy and innovation with the Prince reference! Thorin Bristow for your great article “Views on AI aren’t binary – they’re plural”. Eirliani Abdul Rahman for your insight on social media and digital AI from many orgs. Hafidz Hakimi, Ph.D., for the Malaysian legal perspective. Vinti Agarwal for the Indian legal view of e-sports/gaming. Numa Dhamani for your insight on AI, tech, and book writing. Bastian Scibbe for your insight on data protection and digital rights. John Okande for the Kenyan perspective on tech governance and policy. Ivana Bjelic Vucinic for the insight on the Global Forum for Media Development (GFMD). Ibrahim Sabra for insight on digital expression and social justice. Mesfin Fikre Woldmariam for the Ethiopian perspective on tech governance and free speech. Katie Mellinger for the FCC knowledge. Margareth Kang for the Brazilian tech public policy insight. Luise Eder for helping organize and lead all of this among a bunch of crafty intellectuals. Nicole Stremlau for leading such a diverse and important agenda at a time when it is so relevant. Thanks to everyone else as well.

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Tech Policy from Oxford University. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

References:


[1] University of Oxford. “Oxford Media Policy Summer Institute”. 2024. https://pcmlp.socleg.ox.ac.uk/oxford-media-policy-summer-institute-2024/

[2] Dutton, William. “The fifth estate: the power shift of the digital age.” Oxford University Press. 2023. https://www.tandfonline.com/doi/full/10.1080/1369118X.2024.2343811

[3] Flew, T., & Lin, F. “The third way of global Internet governance: A dialogue with Terry Flew.” Communication and the Public, 7(3). 2022. https://journals.sagepub.com/doi/full/10.1177/20570473221123150

[4] Meta. “The Oversight Board”. 2024. https://www.oversightboard.com/

[5] Tucker, Eric. “US disrupts Russian government-backed disinformation campaign that relied on AI technology”. AP. 2024. https://apnews.com/article/russia-disinformation-fbi-justice-department-50910729878377c0bf64a916983dbe44

[6] FCC. “The Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers.” 2023. https://www.fcc.gov/fcc-nsf-ai-workshop

The Synergy of Art and Technology: Innovation Through Music

Fig. 1. Exploring the landscape of AI-generated music. Todd S Omohundro, 2024.

Art and technology, though seemingly different realms, have consistently converged to drive groundbreaking innovations. When these two domains intersect, they enhance each other’s potential, creating new pathways for expression, communication, and progress. Music, a quintessential form of art, has particularly benefited from technological advancements, leading to transformative changes in how music is created, distributed, and experienced. This essay explores the symbiotic relationship between art and technology in music, highlights pioneering musicians who have embraced technology, and outlines the steps to innovation in this fusion, including the significant financial and business impacts of technologies like streaming.

The Convergence of Art and Technology in Music

Music and technology have been intertwined since the earliest days of instrument development. From the invention of the piano to the electric guitar, technological advancements have continually expanded the boundaries of musical expression. In the modern era, digital technology has revolutionized music production, distribution, and consumption.

The importance of this convergence lies in its ability to democratize music creation and distribution. Technology enables musicians to produce high-quality recordings without the need for expensive studio time, distribute their music globally via digital platforms, and interact with their audience in real-time through social media. This democratization has not only increased the diversity of music available but has also given rise to new genres and forms of expression that were previously unimaginable.

Pioneering Musicians in Technology

Several musicians have stood out as pioneers in integrating technology into their art, pushing the boundaries of what is possible in music.

  1. Brian Eno: Often regarded as the godfather of ambient music, Brian Eno’s work in the 1970s with synthesizers and tape machines laid the foundation for electronic music. His innovations in the use of the studio as an instrument and his development of generative music, which uses algorithms to create ever-changing compositions, have had a lasting impact on the music industry.
  2. Björk: Icelandic artist Björk is renowned for her avant-garde approach to music and technology. Her 2011 album “Biophilia” was released as a series of interactive apps, each corresponding to a different track. This innovative format allowed listeners to explore the music through visual and tactile interaction, blending auditory and digital experiences.
  3. Imogen Heap: British musician Imogen Heap has been at the forefront of music technology with her development of the Mi.Mu gloves. These wearable controllers allow musicians to manipulate sound and effects through hand gestures, providing a new way to perform and interact with music.
  4. Prince: Prince was a visionary who seamlessly integrated technology into his music. He was one of the first major artists to sell an album (1997’s “Crystal Ball”) directly to fans via the internet, bypassing traditional distribution channels. Prince’s use of digital recording techniques and electronic instruments in his music, along with his pioneering approach to online music distribution, showcased his forward-thinking approach to the convergence of music and technology.
  5. Billy Corgan: As the frontman of The Smashing Pumpkins, Billy Corgan has been an advocate for technological advancements in music. He embraced the digital recording revolution early on and has continually pushed the boundaries of what can be achieved in the studio. His use of layered guitars and innovative recording techniques has influenced countless artists and producers.
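Eno's generative idea, simple rules plus randomness yielding ever-changing music, can be illustrated with a minimal and purely hypothetical sketch (this is not Eno's actual system): a pentatonic scale and small melodic steps produce a different phrase on every unseeded run, and the same phrase whenever a seed is fixed.

```python
import random

# A minimal generative-music sketch: fixed rules (a scale plus small
# melodic steps) combined with randomness yield ever-changing phrases.

C_MAJOR_PENTATONIC = ["C", "D", "E", "G", "A"]

def generate_phrase(length=8, seed=None):
    """Generate a phrase by a small random walk over a pentatonic scale."""
    rng = random.Random(seed)
    idx = rng.randrange(len(C_MAJOR_PENTATONIC))
    phrase = []
    for _ in range(length):
        phrase.append(C_MAJOR_PENTATONIC[idx])
        # Step up, step down, or repeat; small moves sound melodic.
        idx = max(0, min(len(C_MAJOR_PENTATONIC) - 1, idx + rng.choice([-1, 0, 1])))
    return phrase

print(generate_phrase(8))          # a new phrase every run
print(generate_phrase(8, seed=42)) # a reproducible phrase for a fixed seed
```

Real generative systems layer many such rule sets over timbre, tempo, and texture, but the core principle is the same: the composer designs the process, and the process composes the piece.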

Financial and Business Impacts of Music Technology

The fusion of music and technology has not only transformed artistic expression but has also had significant financial and business impacts. The advent of digital streaming platforms like Spotify, Apple Music, and Tidal has revolutionized the music industry’s economic model.

  1. Revenue Streams: Streaming has created new revenue streams for artists, labels, and tech companies. While physical album sales have declined, the revenue from streaming subscriptions and ad-supported models has surged, offering artists new ways to monetize their work.
  2. Global Reach: Technology has enabled artists to reach global audiences instantly. Musicians can now distribute their music worldwide with a single click, breaking down geographical barriers and allowing for a more diverse and inclusive music industry.
  3. Data Analytics: Streaming platforms provide valuable data analytics to artists and labels, offering insights into listener behavior, preferences, and trends. This information helps musicians make informed decisions about marketing, touring, and production.
  4. Direct-to-Fan Engagement: Social media and other digital tools allow artists to engage directly with their fans, fostering a more personal connection and enabling innovative marketing strategies. Crowdfunding platforms like Kickstarter and Patreon have also emerged, allowing fans to directly support their favorite artists’ projects.

Steps to Innovation in Music Technology

Innovation at the intersection of music and technology follows several key steps:

  1. Identification of a Need or Opportunity: Innovation begins with recognizing a gap or potential for improvement. For instance, the traditional music industry’s limitations in distribution and production led to the development of digital audio workstations (DAWs) and streaming platforms.
  2. Research and Development: This step involves exploring existing technologies and experimenting with new ideas. Musicians like Brian Eno experimented with tape loops and synthesizers to create new sounds, while modern artists might explore artificial intelligence to compose music.
  3. Implementation and Dissemination: Once a viable innovation is developed, it must be implemented and shared with the broader community. Digital platforms like SoundCloud and Bandcamp have been instrumental in distributing new music technologies and innovations.
  4. Feedback and Iteration: Continuous improvement based on feedback is essential. As technology evolves, so too must the tools and methods used by musicians. This iterative process ensures that innovations remain relevant and effective.
  5. Collaboration: Innovation often requires interdisciplinary collaboration. Musicians work with software developers, engineers, and designers to create new instruments, applications, and performance tools. Björk’s “Biophilia” project, for example, involved collaboration with app developers, designers, and scientists.
  6. Prototyping and Testing: Creating prototypes and testing them in real-world scenarios is crucial. Imogen Heap’s development of the Mi.Mu gloves involved numerous iterations and live performance testing to refine the technology.

Conclusion

The fusion of art and technology, particularly in music, has led to profound innovations that have reshaped the landscape of the industry. Pioneering musicians like Brian Eno, Björk, Imogen Heap, Billy Corgan, and Prince have not only expanded the boundaries of musical expression but have also democratized the creation and distribution of music. The integration of technology in music production and distribution has had significant financial and business impacts, revolutionizing revenue streams, global reach, data analytics, and fan engagement. By following a structured approach to innovation, which includes identifying opportunities, research and development, collaboration, prototyping, implementation, and iteration, artists can continue to push the envelope and create transformative experiences. As technology continues to evolve, the potential for new and exciting innovations in music is boundless, promising a future where the synergy of art and technology will continue to inspire and amaze.


Memorial Day: Honoring Sacrifice and Embracing Technological Evolution in Defense

Memorial Day, Stock Image, 2024.

Memorial Day stands as a solemn and profound tribute to the men and women who have laid down their lives in service to the United States. Observed on the last Monday of May, this federal holiday serves not only as a time for reflection and remembrance but also as an opportunity to acknowledge the evolving landscape of defense, particularly the critical role of technology and innovation. As we honor fallen heroes, we also recognize how advancements in technology have transformed military strategies, enhanced national security, and, tragically, brought both triumphs and losses.

The Essence of Memorial Day:

Memorial Day’s origins date back to the post-Civil War era when it was first known as Decoration Day, a time to decorate the graves of fallen soldiers with flowers. Over time, the holiday expanded to honor all American military personnel who died in service. Today, Memorial Day is marked by ceremonies at cemeteries, memorials, and monuments across the nation. The National Moment of Remembrance at 3:00 PM local time encapsulates the spirit of the day, encouraging Americans to pause and reflect on the sacrifices made for their freedoms.

The Intersection of Memorial Day and Technological Innovation:

The defense industry, marked by rapid technological advancements, plays a crucial role in the nation’s security. Memorial Day reminds us not only of the human cost of war but also of the continuous evolution in warfare technology, which has both saved lives and led to new forms of conflict.

One of the most significant technological advancements in modern warfare is the development and deployment of unmanned aerial vehicles (UAVs), commonly known as drones. These drones have revolutionized reconnaissance and targeted strikes, reducing the need for manned missions and thereby decreasing the risk to military personnel. Yet, despite these advancements, the stories of fallen heroes remind us that technology can only do so much to mitigate the inherent dangers of military service.

Examples of Fallen Heroes:

Among the many who have paid the ultimate price, a few stories stand out, embodying courage and sacrifice:

  1. Sergeant First Class Paul R. Smith: During the 2003 invasion of Iraq, Smith’s unit was attacked by a large enemy force. He manned a machine gun on an armored vehicle, providing covering fire and enabling the evacuation of wounded soldiers. Smith’s actions were pivotal in repelling the attack but led to his death. He was awarded the Medal of Honor for his bravery.
  2. Lieutenant Michael P. Murphy: A Navy SEAL, Murphy was posthumously awarded the Medal of Honor for his actions during Operation Red Wings in Afghanistan. His team was ambushed by Taliban forces, and despite being gravely wounded, Murphy exposed himself to enemy fire to call for reinforcements, ultimately saving the lives of his teammates at the cost of his own.
  3. Specialist Vanessa Guillen: Her tragic story highlights not only the dangers faced by service members but also the critical issues within military culture. Guillen was murdered at Fort Hood in 2020, bringing to light significant problems regarding sexual harassment and violence within the military ranks. Her death sparked a movement for better protection and rights for military personnel.

The Role of Technology in Honoring Sacrifice:

As we honor these and countless other fallen heroes, it is important to consider how technology serves their memory and supports current service members. Advanced medical technologies and innovations in prosthetics have significantly improved the quality of life for wounded veterans. Moreover, initiatives like the use of virtual reality (VR) for training purposes help prepare soldiers for the complexities of modern warfare without the immediate risks posed by live combat.

Also, technology plays a crucial role in preserving the legacy of fallen heroes. Digital archives, virtual memorials, and genealogy databases ensure that the stories and sacrifices of military personnel are not forgotten. These resources allow families and future generations to connect with their history and understand the profound impacts of service and sacrifice.

Conclusion:

Memorial Day is a key reminder of the sacrifices made by military personnel throughout American history. As we honor their memory, we also acknowledge the role of technological innovation in shaping modern defense strategies and safeguarding our nation. The stories of fallen heroes like Lieutenant Michael P. Murphy, Sergeant First Class Paul R. Smith, and Specialist Vanessa Guillen exemplify the bravery and dedication of those who serve. Through continued advancements in technology, we strive to reduce the human cost of conflict while ensuring that the legacy of those who have fallen is preserved and honored for generations to come.


Four Key Emerging Considerations with Artificial Intelligence (AI) in Cyber Security

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #infosec #musktwitter #disinformation #cio #ciso #cto #chatgpt #openai #airisk #iam #rbac #artificialintelligence #samaltman #aiethics #nistai #futurereadybusiness #futureofai

By Jeremy Swenson

Fig. 1. Zero Trust Components to Orchestration AI Mashup; Microsoft, 09/17/21; and Swenson, Jeremy, 03/29/24.

1. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):

      The zero-trust model represents a paradigm shift in cybersecurity, advocating for the premise that no user or system, irrespective of their position within the corporate network, should be automatically trusted. This approach entails stringent enforcement of access controls and continual verification processes to validate the legitimacy of users and devices. By adopting a need-to-know-only access philosophy, often referred to as the principle of least privilege, organizations operate under the assumption of compromise, necessitating robust security measures at every level.

      Implementing a zero-trust framework involves a comprehensive overhaul of traditional security practices. It entails the adoption of single sign-on functionalities at the individual device level and the enhancement of multifactor authentication protocols. Additionally, it requires the implementation of advanced role-based access controls (RBAC), fortified network firewalls, and the formulation of refined need-to-know policies. Effective application whitelisting and blacklisting mechanisms, along with regular group membership reviews, play pivotal roles in bolstering security posture. Moreover, deploying state-of-the-art privileged access management (PAM) tools, such as CyberArk for password check-out and vaulting, enables organizations to enhance toxic combination monitoring and reporting capabilities.
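At its core, least-privilege RBAC reduces to a default-deny lookup. The following sketch is hypothetical (the roles, users, and permission names are invented for illustration, not drawn from any specific PAM product):

```python
# Minimal RBAC sketch illustrating least privilege: every request is
# denied unless the user's role explicitly grants the permission.
# Default deny matches zero trust's assume-compromise posture.

ROLE_PERMISSIONS = {
    "analyst": {"read:logs"},
    "admin": {"read:logs", "write:firewall", "manage:users"},
}

USER_ROLES = {"alice": "analyst", "bob": "admin"}

def is_allowed(user: str, permission: str) -> bool:
    role = USER_ROLES.get(user)
    # Unknown users or roles get an empty permission set: default deny.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("alice", "read:logs")
assert not is_allowed("alice", "write:firewall")  # least privilege in action
assert not is_allowed("mallory", "read:logs")     # unknown user, denied
```

Production systems add continual re-verification, session context, and device posture on top of this lookup, but the default-deny core is the same.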

      App-to-app orchestration refers to the process of coordinating and managing interactions between different applications within a software ecosystem to achieve specific business objectives or workflows. It involves the seamless integration and synchronization of multiple applications to automate complex tasks or processes, facilitating efficient data flow and communication between them. Moreover, it aims to streamline and optimize various operational workflows by orchestrating interactions between disparate applications in a cohesive manner. This orchestration process typically involves defining the sequence of actions, dependencies, and data exchanges required to execute a particular task or workflow across multiple applications.
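As a rough illustration of that orchestration process (the step names and the workflow are hypothetical), an orchestrator resolves dependencies, runs each step once its prerequisites have completed, and passes a shared context between applications:

```python
# Tiny app-to-app orchestration sketch: each step declares its
# dependencies, the orchestrator resolves an execution order, and
# every step reads from and writes to a shared context dict.

def fetch_order(ctx):
    ctx["order"] = {"id": 42, "amount": 99.0}   # e.g. from an order system

def check_inventory(ctx):
    ctx["in_stock"] = ctx["order"]["id"] is not None  # e.g. a warehouse app

def bill_customer(ctx):
    ctx["billed"] = ctx["in_stock"] and ctx["order"]["amount"] > 0  # billing app

STEPS = {
    "fetch_order": ([], fetch_order),
    "check_inventory": (["fetch_order"], check_inventory),
    "bill_customer": (["fetch_order", "check_inventory"], bill_customer),
}

def run(steps):
    done, ctx = set(), {}
    while len(done) < len(steps):
        for name, (deps, fn) in steps.items():
            if name not in done and all(d in done for d in deps):
                fn(ctx)        # run a step once all its dependencies ran
                done.add(name)
    return ctx

print(run(STEPS))
```

Commercial orchestration platforms add retries, error handling, and parallelism, but the pattern of declared dependencies feeding a shared data flow is the essence.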

      However, while the concept of zero-trust offers a compelling vision for fortifying cybersecurity, its effective implementation relies on selecting and integrating the right technological components seamlessly within the existing infrastructure stack. This necessitates careful consideration to ensure that these components complement rather than undermine the orchestration of security measures. Nonetheless, there is optimism that the rapid development and deployment of AI-based custom middleware can mitigate potential complexities inherent in orchestrating zero-trust capabilities. Through automation and orchestration, these technologies aim to streamline security operations, ensuring that the pursuit of heightened security does not inadvertently introduce operational bottlenecks or obscure visibility through complexity.

2. Artificial Intelligence (AI)-Powered Threat Detection Has Improved Analytics:

      The utilization of artificial intelligence (AI) is on the rise to bolster threat detection capabilities. Through machine learning algorithms, extensive datasets are scrutinized to discern patterns suggestive of potential security risks. This facilitates swifter and more precise identification of malicious activities. Enhanced with refined machine learning algorithms, security information and event management (SIEM) systems are adept at pinpointing anomalies in network traffic, application logs, and data flow, thereby expediting the identification of potential security incidents for organizations.
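A drastically simplified flavor of the anomaly detection such SIEM systems perform can be sketched with a z-score over per-host event counts; real platforms use far richer machine-learning models across many features, and the numbers here are illustrative only:

```python
import statistics

# Simplified SIEM-style anomaly check: compare a host's current hourly
# event count against its own historical baseline and flag it when it
# sits more than `threshold` standard deviations above the mean.

def is_anomalous(history, current, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # guard against zero variance
    return (current - mean) / stdev > threshold

# Hourly failed-login counts for one host over recent shifts.
baseline = [20, 22, 19, 21, 23, 20, 18, 22]

print(is_anomalous(baseline, 24))    # ordinary fluctuation: False
print(is_anomalous(baseline, 480))   # brute-force-like spike: True
```

The hard part in practice, as the next paragraph notes, is tuning thresholds and features so that the alert stream is signal rather than noise.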

      False positives, a sustained issue in the past, should also decline. Large, overconfident companies have repeatedly wasted millions of dollars per year fine-tuning data security lakes that mostly produce garbage anomaly detection reports [1], [2] – the kind a good artificial intelligence (AI) laughs at, and we are getting there. For now, technology vendors try to solve this via better SIEM functionality at a premium price, yet we expect prices to drop sharply as the automation matures.

      With enhanced natural language processing (NLP) methodologies, artificial intelligence (AI) systems possess the capability to analyze unstructured data originating from various sources such as social media feeds, images, videos, and news articles. This proficiency enables organizations to compile valuable threat intelligence, staying abreast of indicators of compromise (IOCs) and emerging attack strategies. Notable vendors offering such services include Darktrace, IBM, CrowdStrike, and numerous startups poised to enter the market. The landscape presents ample opportunities for innovation, necessitating the abandonment of past biases. Young, innovative minds well-versed in web 3.0 technologies hold significant value in this domain. Consequently, in the future, more companies are likely to opt for building their own tailored threat detection tools, leveraging advancements in AI platform technology, rather than purchasing pre-existing solutions.

      3. Artificial Intelligence (AI) Driven Threat Response Ability Advances:

      Artificial intelligence (AI) isn’t just confined to threat detection; it’s increasingly playing a pivotal role in automating response actions within cybersecurity operations. This encompasses a range of tasks, including the automatic isolation of compromised systems, the blocking of malicious internet protocol (IP) addresses, the adjustment of firewall configurations, and the coordination of responses to cyber incidents—all achieved with greater efficiency and cost-effectiveness. By harnessing AI-driven algorithms, security orchestration, automation, and response (SOAR) platforms empower organizations to analyze and address security incidents swiftly and intelligently.

      SOAR platforms capitalize on AI capabilities to streamline incident response processes, enabling security teams to automate repetitive tasks and promptly react to evolving threats. These platforms leverage AI not only to detect anomalies but also to craft tailored responses, thereby enhancing the overall resilience of cybersecurity infrastructures. Leading examples of such platforms include Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR, each exemplifying the fusion of AI-driven automation with comprehensive security orchestration capabilities.

Microsoft Sentinel, for instance, utilizes AI algorithms to sift through vast volumes of security data, identifying potential threats and anomalies in real time. It then orchestrates response actions, such as isolating compromised systems or blocking suspicious IP addresses, with precision and speed. Similarly, Rapid7 InsightConnect integrates AI-driven automation to streamline incident response workflows, enabling security teams to mitigate risks more effectively. FortiSOAR, meanwhile, offers a comprehensive suite of AI-powered tools for incident analysis, response automation, and threat intelligence correlation, empowering organizations to proactively defend against cyber threats. In short, AI will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit automatically, leaving more time for analysis of complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [3], allowing them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to counter this with the same AI but with no governance.
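At its core, a SOAR playbook of the kind described above is a decision table wired to response actions. Here is a minimal Python sketch of that OODA-style flow; the alert fields, severity cutoff, and action names are hypothetical illustrations, not the API of Sentinel, InsightConnect, or FortiSOAR.

```python
def run_playbook(alert):
    """Observe -> orient -> decide -> act, as a simple rule table.

    Hypothetical SOAR-style playbook: field names and thresholds are
    illustrative assumptions, not any specific vendor's schema.
    """
    actions = []
    # Orient: classify the alert against known-bad indicators.
    if alert.get("ioc_match"):
        actions.append(f"block_ip:{alert['src_ip']}")
    # Decide: high-severity hits on a host trigger containment.
    if alert.get("severity", 0) >= 8:
        actions.append(f"isolate_host:{alert['host']}")
    # Act: everything else is queued for a human analyst.
    if not actions:
        actions.append("queue_for_analyst")
    return actions

alert = {"src_ip": "203.0.113.7", "host": "ws-042",
         "ioc_match": True, "severity": 9}
print(run_playbook(alert))  # ['block_ip:203.0.113.7', 'isolate_host:ws-042']
```

The design point is that only the unambiguous cases are automated; anything the rules cannot classify falls through to an analyst, which is how SOCs keep automation from causing its own outages.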

        4. Artificial Intelligence (AI) Streamlines Cloud Security Posture Management (CSPM):

        With the escalating migration of organizations to cloud environments, safeguarding the security of cloud assets emerges as a paramount concern. While industry giants like Microsoft, Oracle, and Amazon Web Services (AWS) dominate this landscape with their comprehensive cloud offerings, numerous large organizations opt to establish and maintain their own cloud infrastructures to retain greater control over their data and operations. In response to the evolving security landscape, the adoption of cloud security posture management (CSPM) tools has become imperative for organizations seeking to effectively manage and fortify their cloud environments.

CSPM tools play a pivotal role in enhancing the security posture of cloud infrastructures by continuously monitoring configurations and swiftly identifying misconfigurations that could expose vulnerabilities. These tools autonomously assess cloud configurations against established security best practices, ensuring adherence to stringent compliance standards. Key facets of their functionality include the automatic identification of unnecessary open ports and the verification of proper encryption configurations, thereby mitigating the risk of unauthorized access and data breaches. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [4]. This has considerations at both the cloud user and provider level, especially since artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these builds often use approved plug-ins from different vendors, making it all the more complex.
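The kind of configuration check CSPM tools automate can be illustrated with a short Python sketch. The policy rules here (encryption required, no public access, only port 443 open) are simplified assumptions for demonstration, not a compliance standard or any vendor's rule set.

```python
def audit_bucket(config):
    """Check one storage-bucket config against a toy security policy.

    Illustrative CSPM-style rules: encryption at rest required, no public
    access, and only HTTPS (443) expected open. Real tools evaluate
    hundreds of such checks against benchmarks like CIS.
    """
    findings = []
    if not config.get("encryption_at_rest"):
        findings.append("encryption disabled")
    if config.get("public_access"):
        findings.append("bucket publicly accessible")
    for port in config.get("open_ports", []):
        if port not in (443,):
            findings.append(f"unexpected open port {port}")
    return findings

bad = {"encryption_at_rest": False, "public_access": True,
       "open_ports": [22, 443]}
print(audit_bucket(bad))
```

Running such checks continuously, rather than at audit time, is what turns a static compliance exercise into posture management.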

        Furthermore, CSPM solutions enable organizations to proactively address security gaps and bolster their resilience against emerging threats in the dynamic cloud landscape. By providing real-time insights into the security status of cloud assets, these tools empower security teams to swiftly remediate vulnerabilities and enforce robust security controls. Additionally, CSPM platforms facilitate comprehensive compliance management by generating detailed reports and audit trails, facilitating adherence to regulatory requirements and industry standards.

        In essence, as organizations navigate the complexities of cloud adoption and seek to safeguard their digital assets, CSPM tools serve as indispensable allies in fortifying cloud security postures. By offering automated monitoring, proactive threat detection, and compliance management capabilities, these solutions empower organizations to embrace the transformative potential of cloud technologies while effectively mitigating associated security risks.

        About the Author:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

        References:


[1] Tobin, Donal; “What Challenges Are Hindering the Success of Your Data Lake Initiative?” Integrate.io. 10/05/22: https://www.integrate.io/blog/data-lake-initiative/

[2] Chuvakin, Anton; “Why Your Security Data Lake Project Will … Well, Actually …” Medium. 10/22/22: https://medium.com/anton-on-security/why-your-security-data-lake-project-will-well-actually-78e0e360c292

[3] Michael, Katina; Abbas, Roba; and Roussos, George; “AI in Cybersecurity: The Paradox.” IEEE Transactions on Technology and Society. Vol. 4, no. 2: pp. 104-109. 2023: https://ieeexplore.ieee.org/abstract/document/10153442

[4] Rosencrance, Linda; “How to choose the best cloud security posture management tools.” CSO Online. 10/30/23: https://www.csoonline.com/article/657138/how-to-choose-the-best-cloud-security-posture-management-tools.html

        NIST Cybersecurity Framework (CSF) New Version 2.0 Summary

        Fig. 1. NIST CSF 2.0 Stepper, NIST, 2024.

        #cyberresilience #cybersecurity #generativeai #cyberthreats #enterprisearchitecture #CIO #CTO #riskmanagement #bias #governance #RBAC #CybersecurityFramework #Cybersecurity #NISTCSF #RiskManagement #DigitalResilience #nist #nistframework #cyberawareness

The National Institute of Standards and Technology (NIST) has updated its widely used Cybersecurity Framework (CSF), a free and widely respected landmark guidance document for reducing cybersecurity risk. However, it’s important to note that most of the framework core has remained the same. Here are the core components the security community knows:

Govern (GV): Sets forth the strategic path and guidelines for managing cybersecurity risks, ensuring harmony with business goals and adherence to legal requirements and standards. This is the newest addition, which was implied before but is now explicitly illustrated to touch every aspect of the framework. It seeks to establish and monitor your company’s cybersecurity risk management strategy, expectations, and policy.

1.      Identify (ID): Entails developing an organization-wide understanding of managing cybersecurity risks to systems, assets, data, and capabilities.

        2.      Protect (PR): Concentrates on deploying suitable measures to guarantee the provision of vital services.

        3.      Detect (DE): Specifies the actions for recognizing the onset of a cybersecurity incident.

        4.      Respond (RS): Outlines the actions to take in the event of a cybersecurity incident.

        5.      Recover (RC): Focuses on restoring capabilities or services that were impaired due to a cybersecurity incident.
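For readers who script against the framework, the six functions above can be represented as simple lookup data. The two-letter codes are NIST's official function identifiers; the helper function is just an illustrative convenience, not part of any NIST tooling.

```python
# The six CSF 2.0 functions, keyed by NIST's official two-letter
# identifiers. Category and subcategory levels are omitted for brevity.
CSF_FUNCTIONS = {
    "GV": "Govern",
    "ID": "Identify",
    "PR": "Protect",
    "DE": "Detect",
    "RS": "Respond",
    "RC": "Recover",
}

def lookup(code):
    """Resolve a function identifier, case-insensitively."""
    return CSF_FUNCTIONS.get(code.upper(), "unknown")

print(lookup("gv"))  # Govern
```

Mapping internal controls to these identifiers is a common first step when building a CSF-aligned risk register or gap assessment.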

        The new 2.0 edition is structured for all audiences, industry sectors, and organization types, from the smallest startups and nonprofits to the largest corporations and government departments — regardless of their level of cybersecurity preparedness and complexity.

        Fig. 2. NIST CSF 2.0 Function Breakdown, NIST, 2024.

        Here are some key updates:

        Emphasis is placed on the framework’s expanded scope, extending beyond critical infrastructure to encompass all organizations. Importantly, it better incorporates and expands upon supply chain risk management processes. It also introduces a new focus on governance, highlighting cybersecurity as a critical enterprise risk with many dependencies. This is critically important with the emergence of artificial intelligence.

To make it easier for a wide variety of organizations to implement the CSF 2.0, NIST has developed quick-start guides customized for various audiences, case studies showcasing successful implementations, and a searchable catalog of references, all aimed at facilitating adoption.

The CSF 2.0 is aligned with the National Cybersecurity Strategy and includes a suite of resources to adapt to evolving cybersecurity needs, emphasizing a comprehensive approach to managing cybersecurity risk. New adopters can benefit from implementation examples and quick-start guides tailored to specific user types, easing integration into their cybersecurity practices. The CSF 2.0 Reference Tool simplifies implementation, enabling users to access, search, and export core guidance data in user-friendly and machine-readable formats. A searchable catalog of references allows organizations to cross-reference their actions with the CSF, linking to over 50 other cybersecurity documents for comprehensive risk management. The Cybersecurity and Privacy Reference Tool (CPRT) contextualizes NIST resources with other popular references, supporting communication across all levels of an organization.

        NIST aims to continually enhance CSF resources based on community feedback, encouraging users to share their experiences to improve collective understanding and management of cybersecurity risk. The CSF’s international adoption is significant, with translations of previous versions into 13 languages. NIST expects CSF 2.0 to follow suit, further expanding its global reach. NIST’s collaboration with ISO/IEC aligns cybersecurity frameworks internationally, enabling organizations to utilize CSF functions in conjunction with ISO/IEC resources for comprehensive cybersecurity management.

        Resources:

        1. NIST CSF 2.0 Fact Sheet.
        2. NIST CSF 2.0 PDF.
        3. NIST CSF 2.0 Reference Tool.
        4. NIST CSF 2.0 YouTube Breakdown.


        Top Pros and Cons of Disruptive Artificial Intelligence (AI) in InfoSec

        Fig. 1. Swenson, Jeremy, Stock; AI and InfoSec Trade-offs. 2024.

        Disruptive technology refers to innovations or advancements that significantly alter the existing market landscape by displacing established technologies, products, or services, often leading to the transformation of entire industries. These innovations introduce novel approaches, functionalities, or business models that challenge traditional practices, creating a substantial impact on how businesses operate (ChatGPT, 2024). Disruptive technologies typically emerge rapidly, offering unique solutions that are more efficient, cost-effective, or user-friendly than their predecessors.

The disruptive nature of these technologies often leads to a shift in market dynamics (digital cameras and smartphones, for example), with new entrants or previously marginalized players gaining prominence while established entities may face challenges in adapting to the transformative changes (ChatGPT, 2024). Examples of disruptive technologies include the advent of the internet, mobile technology, and artificial intelligence (AI), each reshaping industries and societal norms. Here are four of the leading AI tools:

        1.       OpenAI’s GPT:

        OpenAI’s GPT (Generative Pre-trained Transformer) models, including GPT-3 and GPT-2, are predecessors to ChatGPT. These models are known for their large-scale language understanding and generation capabilities. GPT-3, in particular, is one of the most advanced language models, featuring 175 billion parameters.

        2.       Microsoft’s DialoGPT:

        DialoGPT is a conversational AI model developed by Microsoft. It is an extension of the GPT architecture but fine-tuned specifically for engaging in multi-turn conversations. DialoGPT exhibits improved dialogue coherence and contextual understanding, making it a competitor in the chatbot space.

        3.       Facebook’s BlenderBot:

        BlenderBot is a conversational AI model developed by Facebook. It aims to address the challenges of maintaining coherent and contextually relevant conversations. BlenderBot is trained using a diverse range of conversations and exhibits improved performance in generating human-like responses in chat-based interactions.

        4.       Rasa:

        Rasa is an open-source conversational AI platform that focuses on building chatbots and voice assistants. Unlike some other models that are pre-trained on large datasets, Rasa allows developers to train models specific to their use cases and customize the behavior of the chatbot. It is known for its flexibility and control over the conversation flow.

        Here is a list of the pros and cons of AI-based infosec capabilities.

        Pros of AI in InfoSec:

        1. Improved Threat Detection:

AI enables quicker and more accurate detection of cybersecurity threats by analyzing vast amounts of data in real time and identifying patterns indicative of malicious activities. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few current examples.

        2. Behavioral Analysis:

AI can perform behavioral analysis to identify anomalies in user behavior or network activities, helping detect insider threats or sophisticated attacks that may go unnoticed by traditional security measures. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege.

        3. Enhanced Phishing Detection:

        AI algorithms can analyze email patterns and content to identify and block phishing attempts more effectively, reducing the likelihood of successful social engineering attacks.
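A crude, hand-coded version of the signals AI phishing filters learn automatically can illustrate the idea. The keyword list, the raw-IP-link rule, and the score values below are assumptions for demonstration, far simpler than a trained model.

```python
import re

# Illustrative phishing tells; real filters learn thousands of features.
SUSPICIOUS = ("verify your account", "urgent", "password expired",
              "click here")

def phishing_score(email_text):
    """Score an email on simple heuristic phishing indicators."""
    text = email_text.lower()
    score = sum(1 for phrase in SUSPICIOUS if phrase in text)
    # Links to raw IP addresses instead of domains are a classic tell.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

msg = "URGENT: verify your account at http://203.0.113.7/login"
print(phishing_score(msg))  # 4
```

A real deployment would feed such features into a classifier and calibrate the blocking threshold against false-positive tolerance, rather than using fixed weights.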

        4. Automation of Routine Tasks:

        AI can automate repetitive and routine tasks, allowing cybersecurity professionals to focus on more complex issues. This helps enhance efficiency and reduces the risk of human error.

        5. Adaptive Defense Systems:

        AI-powered security systems can adapt to evolving threats by continuously learning and updating their defense mechanisms. This adaptability is crucial in the dynamic landscape of cybersecurity.

        6. Quick Response to Incidents:

        AI facilitates rapid response to security incidents by providing real-time analysis and alerts. This speed is essential in preventing or mitigating the impact of cyberattacks.

        Cons of AI in InfoSec:

        1. Sophistication of Attacks:

        As AI is integrated into cybersecurity defenses, attackers may also leverage AI to create more sophisticated and adaptive threats, leading to a continuous escalation in the complexity of cyberattacks.

        2. Ethical Concerns:

        The use of AI in cybersecurity raises ethical considerations, such as privacy issues, potential misuse of AI for surveillance, and the need for transparency in how AI systems operate.

        3. Cost and Resource Intensive:

        Implementing and maintaining AI-powered security systems can be resource-intensive, both in terms of financial investment and skilled personnel required for development, implementation, and ongoing management.

        4. False Positives and Negatives:

        AI systems are not infallible and may produce false positives (incorrectly flagging normal behavior as malicious) or false negatives (failing to detect actual threats). This poses challenges in maintaining a balance between security and user convenience.
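The false positive/negative trade-off is easiest to see with basic confusion-matrix arithmetic. The alert counts below are invented purely for illustration.

```python
def rates(tp, fp, fn, tn):
    """Compute precision, recall, and false positive rate from a
    confusion matrix of alert outcomes (counts are illustrative)."""
    precision = tp / (tp + fp)   # of alerts raised, how many were real
    recall = tp / (tp + fn)      # of real threats, how many were caught
    fpr = fp / (fp + tn)         # benign events wrongly flagged
    return precision, recall, fpr

p, r, f = rates(tp=80, fp=40, fn=20, tn=860)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.8 0.04
```

Even a seemingly low 4% false positive rate can swamp a SOC when the benign event volume is millions per day, which is why tuning these rates matters as much as raw accuracy.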

        5. Lack of Human Understanding:

AI lacks contextual understanding and human intuition, which may result in misinterpretation of certain situations or the inability to recognize subtle indicators of a potential threat. This is where QA and governance come in, in case something goes wrong.

        6. Dependency on Training Data:

        AI models rely on training data, and if the data used is biased or incomplete, it can lead to biased or inaccurate outcomes. Ensuring diverse and representative training data is crucial to the effectiveness of AI in InfoSec.


Seven Cyber-Tech Observations of 2022 and What They Mean for 2023.

        Minneapolis 01/17/23

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #ftxfraud #googlemandiant #infosec #musktwitter #twitterfiles #disinformation #cio #ciso #cto

        By Jeremy Swenson

        Summary:

        Fig. 1. 2022 Cyber Year in Review Mashup; Stock, 2023.

The pandemic continues to be a big part of the catalyst for digital transformation in tech automation, identity and access management (IAM), big data, collaboration tools, artificial intelligence (AI), and increasingly the supply chain. Disinformation efforts morphed and grew last year, with stronger crypto tie-ins challenging data and culture (Twitter hype pump-and-dumps, for example). Additionally, cryptocurrency-based money laundering, fraud, and Ponzi schemes increased, partly due to weaknesses in the fintech ecosystem around compliance, coin splitting/mixing fog, and IAM complexity. This requires better blacklisting by crypto exchanges and banks to stop illicit transactions, erring on the side of compliance, and it requires us to pay more attention to knowing and monitoring our own social media baselines.

        The Costa Rican Government was forced to declare a national emergency on 05/08/22 because the Conti Ransomware intrusion had extended to most of its governmental entities. This was a more advanced and persistent ransomware with Russian gang ties (Associated Press; NBC News, 06/17/22). This highlights the need for smaller countries to better partner with private infrastructure providers and to test for worst-case scenarios.

We no longer have the same office due to mass work from home (WFH) and the mass resignation/gig economy. This implies increased automated zero-trust policies and tools for IAM, with less physical badge access required. The security perimeter is now defined more by data analytics than by physical/digital boundaries. Education and awareness around the review and removal of non-essential mobile apps grow as a top priority as mobile apps multiply. All the while, data breaches and ransomware reach an all-time high while costing more to mitigate. Lastly, all these things make the Google acquisition of Mandiant more relevant, and plausibly one of the most powerful security analytics and digital investigation entities in the world, rivaling nation-state intelligence agencies.

        Intro:

Every year I like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since crypto money laundering via splitting/mixing, disinformation, the pandemic, and the mass resignation/gig economy continue to be a large part of the catalyst for most of these trends. All these trends are likely to significantly impact small businesses, government, education, high-tech, and large enterprises in big and small ways.

        1) The Main Purpose of Cryptocurrency Mixer and/or Splitter Services is Fraud and Money Laundering.

Cryptocurrency mixer and/or splitter services serve no valid “real-world” ethical business use case, considering the fintech and legal options available. Even in the very rare case of a refugee fleeing a financially abusive government regime, or a terrorist organization seeking to steal your assets while the national currency fails, like in Venezuela, which I wrote about in my 2014 article, “Thought$ On The Future of Digital Curren¢y For A Better World,” the concern is political revolution and personal safety more than anything else. Cases like this give a valid reason to mix and/or split your crypto assets, but that is not the use case behind the recent uptick in ill-intended mixer and/or splitter use. Therefore, it’s only fair to discuss the most likely and common use case, which is trending up, and not the few rare edge cases: fraud, Ponzi schemes, and money laundering.

The evidence does not support the claim that a regular crypto exchange is the same as a mixer and/or splitter service. For definition’s sake, mixing and/or splitting cryptocurrency is not the same as selling, buying, or converting it; all of that can be done on one or more crypto exchanges, which is why they are called exchanges. If the services were the same, or even considerably similar, why would people and organizations use mixer and/or splitter services at all? They use them because they offer a considerably different service. Using a mixer and/or splitter assumes you have already obtained crypto beforehand, from a separate exchange, a step or more earlier in the daisy chain, via legal or illegal means. Moreover, why are people paying repeated and hugely excessive fees for these services? The fees are out of line with anything comparable because the operators face higher compliance and legal risk; they could get sanctioned like Blender.io, FTX, Coinbase, Gemini, and others.

You can still have privacy, if that is what you seek, via legal moves such as a trust tied to a separate legal entity, a family office entity, conversion to real estate, or a marriage entity, if you have time to do the paperwork. Legally savvy people often maintain anonymity over their assets to avoid fraudsters and sales reps, or simply for privacy’s sake, but again, that is still not the same use case. Even when people and organizations use these legal instruments for privacy, they still have compliance reporting and tax obligations, some disclosure. Keep in mind that some disclosure serves to protect you, proving that you in fact own the assets you say you own. Using these legal instruments with the right technical security, including an encrypted VPN and multifactor authentication, sustains privacy without the need for a crypto mixer and/or splitter.

Yet if you had cryptocurrency and wanted strong privacy to protect your assets, why would you not at least use some of the aforementioned legal instruments? Mostly because any attorney worth anything would be obligated to report blatant suspected fraud and would not want to tarnish their name on the filings. Specifically, the attorney would have to see and know where and what entities the crypto was coming from and going to, and under what contexts; that could trigger them to report or refuse the work, and a fraudster wants to avoid detection.

Specifically, the use of multiple legal entities in different countries in a daisy chain of crypto coin mixing and/or splitting tends to be the pattern for persistent fraud and money laundering. That was the case in the $4.5 billion crypto theft out of NY (“Crocodile of Wall Street”), the Blender mixing fraud, and many other cases.

        A recent May 2022 U.S. Treasury press release concerning mixer service money laundering described it this way (Dept of Treasury; Press Release, 05/06/22):

“Blender.io (Blender) is a virtual currency mixer that operates on the Bitcoin blockchain and indiscriminately facilitates illicit transactions by obfuscating their origin, destination, and counterparties. Blender receives a variety of transactions and mixes them together before transmitting them to their ultimate destinations. While the purported purpose is to increase privacy, mixers like Blender are commonly used by illicit actors. Blender has helped transfer more than $500 million worth of Bitcoin since its creation in 2017. Blender was used in the laundering process for DPRK’s Axie Infinity heist, processing over $20.5 million in illicit proceeds.”

Fig 2. U.S. Treasury Dept; Blender.io Crypto Mixer Fraud, 2022.

        The question we as a society should be thinking about is tech ethics. What design feature crosses the line to enable fraud too much such that it is not pursued? For example, Silk Road crossed the line, selling illegal drugs, extortion, and other crime. Hacker networks cross the line when they breach companies and steal their credit card data and put it for sale on the dark web. Facebook crossed the line when it enabled bias and undue favor to impact policy outcomes.

Crypto mixer and/or splitter services (not mere crypto exchanges) are about as close to “money laundering as a service” as it gets, relative to anything else technically available, excluding the dark web where far worse things exist. Obviously, the developers, product owners, and project managers behind these crypto mixer and/or splitter services are serving the fraud and money laundering use case more than anything else. Organized crime rings are very likely giving them money and direction to this end.

If you support and use mixer and/or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you pay extortionately high fees; you have zero customer service, no regulatory protection, and no decent Terms of Service and/or Privacy Policy, if any; and you have no guarantee that it will even work the way you think it will.

In fact, you have so much decentralized “so-called” privacy that it could work against you. For example, imagine you pay the high fees to mix and split your crypto multiple times, and then your crypto is stolen by one of the mixing and/or splitting services. This is plausible because the operators know many of their customers are committing fraud and money laundering, and even customers who are not remain associated with it; if the platform operators steal crypto in this process, the victims have little incentive to speak up. Moreover, the mixing and/or splitting companies have a convenient cover for theft: privacy. They won’t admit they stole it but will say something like “everything is private, so we can’t see or know, but you are responsible for what private assets you have or don’t have.” They will claim “stealing it is impossible,” which of course is a complete lie.

In sum, what reason do you have to trust a crypto mixing and/or splitting service with your digital assets? As outlined above, they are hardly incentivized to protect them, or you, and they operate in the shadows of antiquated non-western fintech regulation. So what do you really get besides likely fraud? No solid argument or evidence supports privacy alone as the rationale, and the net effect is enabling money laundering and fraud.

        Now there are valid use cases for crypto and blockchain technology generally and here are five of them:

        1.      Innovative tech removing the central bank for peer-to-peer exchange that is faster and more global, especially helping the underbanked countries.

        2.      Smart contracts can be built on blockchain.

        3.      Blockchain can be used for crowdfunding.

        4.      Blockchain can be used for decentralized storage.

        5.      The traditional cash and coin supply chain is burdensomely wasteful, costly, dirty, and counterfeiting is a real issue. Why do you need to carry ten dollars in quarters or a wad of twenty-dollar bills or even have that be a nation’s economic backing in today’s tech world?

        Here are six tips to identify crypto-related scams:

        1.      With most businesses, it should be easy to find out who the key operators are. If you can’t find out who is running a cryptocurrency or exchange via LinkedIn, Medium, Twitter, a website, or the like be very cautious.

        2.      Whether in cash or cryptocurrency, any business opportunity promising free money is likely to be fake. If it sounds too good to be true it likely is. Multi-level marketing is one old example of this scam.

3.      Never mix online dating and investment/financial advice. If you meet someone on a dating site or social media app and they then want to show you how to invest in crypto, or they ask you to send them crypto, it is a scam no matter what sob story or huge return they claim (FTC).

        4.      Watch out for scammers who pretend to be celebrities who can multiply any cryptocurrency you send them. If you click on an unexpected link they send or send cryptocurrency to a so-called celebrity’s QR code, that money will go straight to a scammer, and it’ll be gone. Celebrities don’t have time to contact random people on social media, but they are easily impersonated (FTC).

        5.      Celebrities are, however, used to pump crypto prices via social media so that they get a windfall and everyone else takes a hit. Watch out for crypto like Dogecoin, which is heavily tied to celebrity pumps with no real-world business value. If you are lucky enough to get ahead, get out while you are ahead.

        6.      Watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers or advisors want to share that information and will back it up with details in many documents and filings (FTC). 

        2) Disinformation Efforts Are Further Exposed:

        Disinformation has not slowed in 2022, due to sustained advancements in communications technologies, the growth of large social media networks, and the “appification” of everything, all of which increase the ease and reach of disinformation. Disinformation is defined as incorrect information intended to mislead or disrupt, especially propaganda issued by a government organization to a rival power or the media. For example, governments create digital hate mobs to smear key activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion (Shelly Banjo; Bloomberg, 05/18/2019).

        Today’s disinformation war is largely digital, waged via platforms like Facebook, Twitter, Instagram, Reddit, WhatsApp, Yelp, TikTok, SMS text messages, and many other lesser-known apps. Yet even state-sponsored and private news organizations are increasingly the weapon of choice, creating a false sense of validity. Undeniably, the battlefield is wherever many followers reside.

        Bots and botnets are often behind the spread of disinformation, complicating efforts to trace and stop it. Further complicating this phenomenon is the number of app-to-app permissions. For example, the CNN and Twitter apps may have permission to post to Facebook, Facebook may have permission to post to WordPress, and WordPress may post to Reddit, or any combination like this. Not only does this make it hard to identify the chain of custody and original source, but it also weakens privacy and security due to the many authentication permissions involved. The copied data is duplicated at each of these layers, which is an additional consideration.
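        This propagation problem can be sketched as a small graph traversal. The app names and permission edges below are hypothetical, purely to illustrate how far a single post can cascade once app-to-app permissions chain together:

```python
# Hypothetical sketch: app-to-app posting permissions modeled as a directed
# graph. Every edge is an invented example, not real platform data.
from collections import deque

# Each key grants posting permission to the apps in its list.
permissions = {
    "CNN app": ["Facebook"],
    "Twitter": ["Facebook"],
    "Facebook": ["WordPress"],
    "WordPress": ["Reddit"],
}

def reachable(source: str) -> set:
    """Return every platform a post from `source` can cascade to."""
    seen, queue = set(), deque([source])
    while queue:
        app = queue.popleft()
        for nxt in permissions.get(app, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("Twitter")))  # ['Facebook', 'Reddit', 'WordPress']
```

        Even this toy chain shows why provenance is hard: a tweet can surface on Reddit three hops later with no visible link back to the original source, and each hop is another set of stored credentials to secure.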

        We all know that false news spreads faster than real news most of the time, largely because it is sensationalized. Because most disinformation draws in viewers, driving clicks and ad revenue, it is a money-making machine. If you can significantly control what is trending in the news and/or social media, you influence how many people will believe it. This in turn impacts how many people will act on that belief, good or bad, and it is exacerbated when combined with human bias or irrational emotion.

        In 2022 there were many cases of fake crypto initial coin offerings (ICOs) and related scams, including Titanium Blockchain, where investors lost at least $21 million (Dept of Justice; Press Release, 07/25/22). The Celsius crypto lending platform also came tumbling down, largely because it was a social media-hyped Ponzi scheme (CNBC; Arjun Kharpal, 07/08/22). This negatively impacts culture by setting a misguided example of what is acceptable.

        Elon Musk’s controversial purchase of Twitter for $44 billion in October 2022 resulted in a big management shakeup and strategy change (New York Times; Kate Conger and Lauren Hirsch, 10/27/22). The goal was to reduce bias and misinformation in the name of free and fair speech. To this end, the new Twitter under Musk’s direction produced “The Twitter Files,” a set of internal Twitter, Inc. documents made public beginning in December 2022. This was done with the help of independent journalists Matt Taibbi, Bari Weiss, and Lee Fang, and authors Michael Shellenberger, David Zweig, and Alex Berenson.

        The sixth release of the Twitter Files was on 12/12/22 and revealed (Real Clear Politics; Kalev Leetaru, 12/20/22):

        “Twitter granted great deference to government agencies and select outside organizations. While any Twitter user can report a tweet for removal, officials at the platform provided more direct and expedited channels for select organizations, raising obvious ethical questions about the government’s non-public efforts at censorship. It also captured the degree to which law enforcement requested information – from the physical location of users to foreign influence – from social platforms outside of formal court orders, raising important questions of due process and accountability.”

        Fig. 3. Elon Musk Twitter Freedom of Speech Mash Up; Stock / Getty, 2022.

        With the help of Twitter’s misinformation, huge swaths of confused voters and activists aligned more with speculation and emotion/hype than with unbiased facts, and/or projected themselves as fake commentators. This dirtied the data of the election process and raises the question – which parts of the election information process are broken? It normalizes petty policy fights, emotional reasoning, and a lack of unbiased intellectualism, negatively impacting western culture – all to the threat actors’ delight. Increased public-to-private partnerships, more educational rigor, and enhanced privacy protections for election and voter data are needed to combat this disinformation.

        3) Identity and Access Management (IAM) Scrutiny Drives Zero Trust Orchestration:

        The pandemic and the mass resignation/gig economy have pushed most organizations to a mass work-from-home (WFH) posture. Generally, this improves productivity, making it likely to become the new norm, albeit with new rules and controls. To support this, 51% of business leaders sped up the deployment of zero trust capabilities in 2020 (Andrew Conway; Microsoft, 08/19/20), and there is no evidence to suggest this slowed in 2022; rather, it is likely increasing to support zero trust orchestration.

        Orchestration is enhanced automation between partner zero trust applications and data that leaves next to no blind spots. This reduces risk and increases visibility and infrastructure control in an agile way. The quantified benefit of deploying mature zero trust capabilities, including orchestration, is on average $1.51 million less in breach response costs compared to an organization that has not rolled out zero trust capabilities (IBM Security; Cost of a Data Breach Report, 2022).

        Fig. 4. Zero Trust Components to Orchestration; Microsoft, 09/17/21

        Zero trust moves organizations to a need-to-know-only access mindset with inherent deny rules, all the while assuming you are already compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of apps, group membership reviews, and state-of-the-art privileged access management (PAM) tools in the next year. In the future, more of this will likely be automated and orchestrated (Fig. 4.) so that one part of the zero trust stack does not hinder another via complexity fog.
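        The deny-by-default idea can be shown in a minimal sketch. The role and resource names below are hypothetical; a real deployment would sit behind a PAM tool and evaluate device health, location, and MFA state on every request:

```python
# Minimal sketch of a zero-trust, deny-by-default RBAC check.
# Role names and resource/action strings are invented for illustration.
ROLE_GRANTS = {
    "hr-analyst": {"hr-db:read"},
    "dba": {"hr-db:read", "hr-db:write"},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Allow only explicit grants; anything not listed is denied."""
    return f"{resource}:{action}" in ROLE_GRANTS.get(role, set())

assert is_allowed("dba", "hr-db", "write")             # explicit grant
assert not is_allowed("hr-analyst", "hr-db", "write")  # implicit deny
assert not is_allowed("intern", "hr-db", "read")       # unknown role: deny
```

        The key design choice is that the absence of a rule is itself the deny rule: an unknown role or a new resource is locked out until someone explicitly grants access.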

        4) Security Perimeter is Now More Defined by Data Analytics than Physical/Digital Boundaries:

        This increased WFH posture blurs the security perimeter physically and digitally. New IP addresses, internet volume, routing, geolocation, and virtual machines (VMs) exacerbate this blur, raising the criticality of good data analytics and dashboarding to define the digital boundaries in real time. Prior audits, security controls, and policies may therefore be ineffective. For instance, empty corporate offices are the physical byproduct of mass WFH, requiring organizations to disable badge access by default. Extra security in or near server rooms is also required. The pandemic has also made vendor interactions more digital, so digital vendor connection points should be reduced and monitored in real time, and the related exception policies should be re-evaluated.

        New data lakes and machine learning-informed patterns can better define security perimeter baselines. One example is knowing what percentage of your remote workforce is on which internet providers and of what type – for example, Google Fiber, Comcast cable, CenturyLink DSL, ATT 5G, etc. Only certain modems can pair with each of these networks, and that leaves a data trail, though it could be any type of router. What type of device they connect with – Mac/Apple, VM, or other – and whether it is healthy can all be determined in relation to security perimeter analytics.
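        As a rough illustration, such a baseline can be computed from connection logs and then used to flag sessions that fall outside it. The session records and the 5% threshold below are invented assumptions; a real pipeline would draw on far more signals:

```python
# Hypothetical sketch: baseline the ISP mix of a remote workforce and flag
# sessions on providers rarely or never seen before. Log data is fabricated.
from collections import Counter

sessions = [
    {"user": "ann", "isp": "Comcast cable"},
    {"user": "bob", "isp": "Comcast cable"},
    {"user": "cy",  "isp": "CenturyLink DSL"},
    {"user": "dee", "isp": "ATT 5G"},
]

counts = Counter(s["isp"] for s in sessions)
baseline = {isp: n / len(sessions) for isp, n in counts.items()}

def is_anomalous(isp: str, min_share: float = 0.05) -> bool:
    """Flag providers that fall below the baseline share threshold."""
    return baseline.get(isp, 0.0) < min_share

print(baseline)                       # Comcast cable holds a 0.5 share
print(is_anomalous("Unknown VPN exit"))  # True: never seen in the baseline
```

        In practice the same pattern extends to device type, geolocation, and connection health, with the baseline recomputed continuously and surfaced on a dashboard.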

        5) Cyber Firm Mandiant Was Purchased by Google Spawning Private Sector Security Innovation.

        Google completed its acquisition of security and incident response firm Mandiant for $5.4 billion in September 2022 (Google Cloud; Thomas Kurian, CEO – Google Cloud, 09/12/22). This acquisition positions the search and advertising leader with better cloud security infrastructure, better market appeal, and more diversification. With a more advanced and integrated security foundation, Google Cloud can compete better against market leader Amazon Web Services (AWS) and runner-up Microsoft Azure. It will do this on more than price, because its features will likely grow to leverage differentiating machine learning and analytical abilities via clients throughout the industry.

        Other benefits of integrating Mandiant include improved automated breach response logic. This is because security teams can now gather the required data and then share it across Google customers to help analyze ransomware threat variants. Many of Google’s security related products will also be enhanced by Mandiant’s threat intelligence and incident response capabilities. Some of these products include Google’s security orchestration, automation and response (SOAR) tool which is described this way, “Part of Chronicle Security Operations, Chronicle SOAR enables modern, fast and effective response to cyber threats by combining playbook automation, case management and integrated threat intelligence in one cloud-native, intuitive experience” (Google; Google Cloud, 01/16/23).

        According to Dave Cundiff, CISO at Cyvatar, “if Google, as one of the leaders in data science, can progress and move forward the ability to prevent the unknown vectors of attack before they happen based upon the mountains of data available from previous breaches investigated by Mandiant, there could truly be a significant advancement in cybersecurity for its cloud customers” (SC Media; Steve Zurier, 04/15/22). This results in a strong focus on prevention vs. response, which is greatly needed. Lastly, since AWS and Microsoft will be unlikely to hire Mandiant directly because Google owns them, they will likely look to acquire another security services player soon.

        6) Data Breaches Have Increased in Number and Cost but Are Generally Identified Faster.

        The pandemic has continued to be part of the catalyst for increased lawlessness, including fraud, ransomware, data theft, and other types of profitable hacking. Cybercriminals are aggressively taking advantage of geopolitical conflict and gaps in legal standing. For example, almost all hacking operations are based in countries that do not have friendly geopolitical relations with the United States or its allies, and their many proxy hops stay consistent with this. These proxy hops are how they hide their true location and identity.

        Moreover, with local police departments extremely overworked and understaffed, and their number one priority being the huge uptick in violent crime in most major cities, white-collar cybercrimes remain a low priority. Additionally, local police departments have few cyber response capabilities, depending on the size of their precinct. Often, they must sheepishly defer to the FBI, CISA, and the Secret Service, or their delegates, for help. Yet unsurprisingly, there is a backlog for that as well, with preference going to large companies of national concern that fall clearly into one of the 16 critical infrastructure sectors – that is, if turf fights and bureaucratic roadblocks don’t make things worse. Thus, many mid- and small-sized businesses are left in the cold to fend for themselves, which often results in them paying the ransom and then being victimized a second time when their insurance carrier denies their claims, raises their rates, and/or drops them.

        Further complicating this is a lack of clarity on data breach and business interruption insurance coverage and terms. Keep in mind that most general business liability insurance policies and terms were drafted before modern hacking existed, so by default they lag behind the technology. Most often, general liability business insurance covers bodily injuries and property damage resulting from your products, services, or operations. Please see my related article “10 Things IT Executives Must Know About Cyber Insurance” to understand incident response and to reduce the risk of inadequate coverage and/or claims denials.

        Data breaches are more expensive than ever. IBM’s 2022 annual Cost of a Data Breach Report put the average cost of a data breach at an estimated $4.35 million per organization. This is a $110,000 year-over-year increase, or 2.6%, and the highest in the report’s history (Fig. 5). However, the average times to identify and to contain a data breach each decreased by 5 days (Fig. 6), for a total decrease of 10 days, or 3.5%. Note this is for general data breaches, not ransomware attacks.

        Fig. 5. Cost of a Data Breach Increases, 2021 to 2022 (IBM Security, 2022).
        Fig. 6. Average Time to Identify and Contain a Data Breach Decreases, 2021 to 2022 (IBM Security, 2022).
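        A quick sanity check of these year-over-year figures, using only the numbers cited in the text rather than the underlying IBM dataset:

```python
# Sketch: verify the cited cost and time figures are self-consistent.
avg_2022 = 4.35e6                 # average breach cost, 2022
yoy_increase = 0.11e6             # cited ~$110,000 year-over-year increase
avg_2021 = avg_2022 - yoy_increase
pct_increase = yoy_increase / avg_2021 * 100
print(round(pct_increase, 1))     # matches the cited 2.6%

days_saved = 5 + 5                # identify and contain each fell 5 days
implied_2021_days = round(days_saved / 0.035)
print(implied_2021_days)          # a 10-day drop at 3.5% implies a ~286-day
                                  # identify-and-contain baseline in 2021
```

        The figures hang together: a $110,000 rise on a ~$4.24 million 2021 base is 2.6%, and a 10-day improvement at 3.5% implies organizations took roughly 286 days to identify and contain a breach in 2021.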

        Lastly, this is a lot of money for an organization to spend on a breach. Yet the amount could be higher when you factor in other long-term consequence costs such as the increased risk of a second breach, brand damage, and/or delayed regulatory penalties that were below the surface – all of which differ by industry. In sum, it is cheaper and more risk-prudent to spend even $4.35 million, or a relative percentage at your organization, on preventative zero trust capabilities than to deal with the fallout of a data breach.

        7) The Costa Rican Government was Heavily Hacked and Encrypted by the Conti Ransomware.

        The Costa Rican Government was forced to declare a national emergency on 05/08/22 because the Conti ransomware intrusion had extended to most of its governmental entities. Conti is an advanced and persistent ransomware-as-a-service attack platform. The attackers are believed to be the Russian cybercrime gang Wizard Spider (Associated Press; NBC News, 06/17/22). “The threat actor entry point was a system belonging to Costa Rica’s Ministry of Finance, to which a member of the group referred to as ‘MemberX’ gained access over a VPN connection using compromised credentials” (Bleeping Computer; Ionut Ilascu, 07/21/22). Phishing is a common way to harvest such credentials, but in this case, “Using the Mimikatz post-exploitation tool for exfiltrating credentials, the adversary collected the logon passwords and NTDS hashes for the local users, thus getting ‘plaintext and bruteable local admin, domain and enterprise administrator hashes’” (Bleeping Computer; Ionut Ilascu, 07/21/22).

        Fig. 7. Costa Rica Conti Ransomware Attack Architecture; AdvIntel via (Bleeping Computer; Ionut Ilascu, 07/21/22).

        This resulted in 672GB of data leaked and dumped or 97% of what was stolen (Bleeping Computer; Ionut Ilascu, 07/21/22). Some believe Costa Rica was targeted because they supported Ukraine against Russia. This highlights the need for smaller countries to better partner with private infrastructure providers and to test for worst-case scenarios.

        Take-Aways:

        The pandemic remains a catalyst for digital transformation in tech automation, IAM, big data, collaboration tools, and AI. We no longer have the same office and thus less badge access is needed. The growth and acceptability of mass WFH combined with the mass resignation/gig economy remind employers that great pay and culture alone are not enough to keep top talent. Signing bonuses and personalized treatment are likely needed. Single sign-on (SSO) will expand to personal devices and smartphones/watches. Geolocation-based authentication is here to stay with double biometrics likely. The security perimeter is now more defined by data analytics than physical/digital boundaries, and we should dashboard this with machine learning and AI tools.

        Education and awareness around the review and removal of non-essential mobile apps is a top priority. Especially for mobile devices used separately or jointly for work purposes. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. A bring your own device (BYOD) policy needs to be written, followed, and updated often informed by need-to-know and role-based access (RBAC) principles. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web/IT risk. Mapping the mobile ecosystem components in detail is a must.

        IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about it, because if we are, our organizations will stay weak and insecure, and we will be played by the same political bias we fear confronting. As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. More social media training is needed, as many security professionals still think it is mostly an external marketing concern. Public-to-private partnerships need to improve, and app-to-app permissions need to be scrutinized. Enhanced privacy protections for election and voter data are needed. Not everyone needs to be a journalist, but everyone can have the common sense to identify malware-inspired fake news. We must report undue bias in big tech from IT, compliance, media, and security perspectives.

        Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity/fog. Organizations should preconfigure cloud-scale options and spend more on cloud-trained staff. They should also make sure they select at least two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and add-ons. It also mitigates risk and makes vendors bid more competitively.

        In regard to cryptocurrency, NFTs, ICOs, and related exchanges – watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers or advisors want to share that information and will back it up with details in many documents and filings (FTC).

        Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erring on the side of compliance, and it requires us to pay more attention to knowing and monitoring our own social media baselines. If you use crypto mixer and/or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face extortionately high fees, zero customer service, no regulatory protection, no decent Terms of Service and/or Privacy Policy (if any), and no guarantee that the service will even work the way you think it will.

        About the Author:

        Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments, including being a member of the Federal Reserve Secure Payment Task Force. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google combining Google+ video chat with Google Hangouts have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire.

        The Main Purpose of Cryptocurrency Mixer and/or Splitter Services is Fraud and Money Laundering.

        Cryptocurrency mixer and/or splitter services serve no valid “real-world” ethical business use case, considering the relevant fintech and legal options available. Even in the very rare case where you are a refugee fleeing a financially abusive government regime, or a terrorist organization is seeking to steal your assets while the national currency is failing – as in Venezuela, which I wrote about in my 2014 article – that is about political revolution and your personal safety more than anything else. Although cases like this give a valid reason why you might want to mix and/or split your crypto assets, that is not the use case behind the recent uptick in crypto mixer and/or splitter service use. It is only fair that we discuss the most likely and common use case, which is trending up, and not the few rare edge cases. That use case is fraud and money laundering.

        The evidence does not support that a regular crypto exchange is the same thing as a mixer and/or splitter service. For definition’s sake, I am not defining mixing and/or splitting cryptocurrency as the same thing as selling, buying, or converting it – all of which can be done on one or more of the crypto exchanges, which is why they are called exchanges. If the two were the same, or even considerably similar, why would people and orgs use the mixer and/or splitter services at all? They use them because the services are considerably different. Using a mixer and/or splitter service assumes you have obtained some crypto beforehand, from a separate exchange, a step or more earlier in the daisy chain. This can be done via legal or illegal means. Moreover, why are users paying repeated and hugely excessive fees for these services? The fees are out of line with anything comparable because the operators carry higher compliance and legal risk, in that they could get sanctioned like Blender.io and others.

        You can still have privacy, if that is what you are seeking, via legal means such as a trust tied to a separate legal entity, a family office entity, conversion to real estate, or a marriage entity – if you have time to do the paperwork. Legally savvy people often maintain anonymity over their assets to avoid fraudsters and sales reps, or just for privacy’s sake – but again, that is not the same use case. Even when people and orgs use these legal instruments for privacy, they still have compliance reporting and tax obligations, i.e., some disclosure. Keep in mind that some disclosure serves to protect you, proving that you in fact own the assets you say you own. Using these legal instruments with the right technical security, including an encrypted VPN and multifactor authentication, serves to sustain privacy, and you will then not need a crypto mixer and/or splitter.

        Yet if you had cryptocurrency and wanted strong privacy to protect your assets, why would you not at least use some of the aforementioned legal instruments or the like? Mostly because any attorney worth anything would be obligated to report blatant suspected fraud and would not want to tarnish their name on the filings. Specifically, the attorney would have to see and know where and what entities the crypto was coming from and going to, and under what contexts, and that could trigger them to report the client or refuse the work – i.e., a fraudster would want to avoid getting detected.

        Specifically, the use of multiple legal entities in different countries in a daisy chain of crypto coin mixing and/or splitting tends to be the pattern for persistent fraud and money laundering. That was the case in the $4.5 billion crypto theft out of New York, in the Blender mixing fraud, and in many other cases.
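        A toy taint-tracking sketch shows why investigators treat pooled mixing as a laundering pattern; all addresses and amounts below are fabricated for illustration:

```python
# Illustrative sketch: naive taint tracking through a mixer. Once funds are
# pooled, every downstream output inherits the taint of any illicit input,
# which is why exchanges tend to blacklist mixer outputs wholesale.
transfers = [
    ("stolen-wallet", "mixer-pool", 10.0),
    ("clean-wallet-1", "mixer-pool", 5.0),
    ("clean-wallet-2", "mixer-pool", 5.0),
    ("mixer-pool", "cashout-A", 10.0),
    ("mixer-pool", "cashout-B", 10.0),
]

tainted = {"stolen-wallet"}
for src, dst, _amt in transfers:
    # Any address that ever receives from a tainted address becomes tainted.
    if src in tainted:
        tainted.add(dst)

print(sorted(tainted))
# ['cashout-A', 'cashout-B', 'mixer-pool', 'stolen-wallet']
```

        Note that the two clean input wallets stay untainted, but both cashout addresses are flagged even though only half the pooled funds were stolen – the mixing step destroys the ability to distinguish them, which is precisely the service being sold.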

        A recent U.S. Treasury press release concerning mixer service money laundering described it this way:

        • “Blender.io (Blender) is a virtual currency mixer that operates on the Bitcoin blockchain and indiscriminately facilitates illicit transactions by obfuscating their origin, destination, and counterparties. Blender receives a variety of transactions and mixes them together before transmitting them to their ultimate destinations. While the purported purpose is to increase privacy, mixers like Blender are commonly used by illicit actors. Blender has helped transfer more than $500 million worth of Bitcoin since its creation in 2017. Blender was used in the laundering process for DPRK’s Axie Infinity heist, processing over $20.5 million in illicit proceeds”.
        Fig 1. U.S. Treasury Dept, Blender.io Crypto Mixer Fraud, 2022.

        The question we as a society should be thinking about is tech ethics. What design feature crosses the line into enabling fraud so much that it should not be pursued? For example, Silk Road crossed the line by selling illegal drugs, enabling extortion, and facilitating other crime. Hacker networks cross the line when they breach companies, steal their credit card data, and put it up for sale on the dark web. Facebook crossed the line when it enabled bias and undue favor to impact policy outcomes.

        Crypto mixer and/or splitter services (not mere crypto exchanges) are about as close to “money laundering as a service” as it gets, relative to anything else technically available outside the dark web, where far worse things exist. Obviously, the developers, product owners, and project managers behind such services are serving the fraud and money laundering use case more than anything else. Organized crime rings are very likely giving them money and direction to this end.

        If you use mixer and/or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face extortionately high fees, zero customer service, no regulatory protection, no decent Terms of Service and/or Privacy Policy (if any), and no guarantee that the service will even work the way you think it will.

        In fact, you have so much decentralized “so-called” privacy that it could work against you. For example, imagine you pay the high fees to mix and split your crypto multiple times, and then your crypto is stolen by one of the mixing and/or splitting services. They can get away with this because they know many of their customers are committing fraud and money laundering – and even those who are not are associated with it – so victims have little incentive to speak up if the platform operators steal their crypto in the process. Moreover, the mixing and/or splitting service companies have a nice cover for stealing it: privacy. They will not admit they stole it but will say something like “everything is private, so we cannot see or know, but you are responsible for what private assets you have or don’t have.” They will say something like “stealing it is impossible,” which of course is a complete lie.

        In sum, what reason do you have to trust a crypto mixing and/or splitting service with your digital assets? As outlined above, these services are hardly incentivized to protect them or you, and they operate in the shadows of antiquated, non-Western fintech regulation. So, what do you really get besides likely fraud? What business rationale supports using these services when no solid argument or evidence shows the motive is privacy alone, and what net benefit do you get besides enabling money laundering and fraud?

        Now there are valid use cases for crypto and blockchain generally and here are five of them:

        1. Innovative tech removing the central bank for peer-to-peer exchange that is faster and more global, especially helping the underbanked countries.
        2. Smart contracts can be built on blockchain.
        3. Blockchain can be used for crowdfunding.
        4. Blockchain can be used for decentralized storage.
        5. The traditional cash and coin supply chain is burdensomely wasteful, costly, dirty, and counterfeiting is a real issue. Why do you need to carry ten dollars in quarters or a wad of twenty-dollar bills or even have that be a nation’s economic backing in today’s tech world?

        Here are six tips to identify crypto-related scams:

        1. With most businesses, it should be easy to find out who the key operators are. If you can’t find out who is running a cryptocurrency or exchange via LinkedIn, Medium, Twitter, a website, or the like, be very cautious.
        2. Whether in cash or cryptocurrency, any business opportunity promising free money is likely to be fake. If it sounds too good to be true it likely is. Multi-level marketing is one old example of this scam.
        3. Never mix online dating and investment/financial advice. If you meet someone on a dating site or social media app and they then want to show you how to invest in crypto, or they ask you to send them crypto, it is a scam no matter what sob story or huge return they claim (FTC).
        4. Watch out for scammers who pretend to be celebrities who can multiply any cryptocurrency you send them. If you click on an unexpected link they send or send cryptocurrency to a so-called celebrity’s QR code, that money will go straight to a scammer, and it’ll be gone. Celebrities don’t have time to contact random people on social media, but they are easily impersonated (FTC).
        5. Celebrities are, however, used to pump crypto prices via social media so that they get a windfall and everyone else takes a hit. Watch out for crypto like Dogecoin, which is heavily tied to celebrity pumps with no real-world business value. If you are lucky enough to get ahead, get out while you are ahead.
        6. Watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers or advisors want to share that information and will back it up with details in many documents and filings (FTC).


        Five Cyber-Tech Trends of 2021 and What They Mean for 2022.

        Minneapolis 01/08/22

        By Jeremy Swenson

        Intro:

        Every year I like to research and comment on the most impactful security technology and business happenings of the prior year. This year is unique in that the pandemic and the mass resignation/gig economy continue to be a large part of the catalyst for most of these trends. All of these trends are likely to significantly impact small businesses, government, education, high tech, and large enterprises in ways big and small.

        Fig. 1. Facebook Whistle Blower and Disinformation Mashup (Getty & Stock Mashup, 2021).

        Summary:

        The pandemic continues to be a big part of the catalyst for digital transformation in tech automation, identity and access management (IAM), big data, collaboration tools, artificial intelligence (AI), and increasingly the supply chain. Disinformation efforts morphed and grew last year, challenging data and culture. This requires us to pay more attention to knowing and monitoring our own social media baselines. We no longer have the same office due to mass work from home (WFH) and the mass resignation/gig economy. This implies increased automated zero-trust policies and tools for IAM, with less physical badge access required. The security perimeter is now more defined by data analytics than by physical/digital boundaries.

        The importance of supply chain cyber security was elevated by the Biden Administration’s Executive Order 14028, issued in response to hacks including SolarWinds and Colonial Pipeline. Education and awareness around the review and removal of non-essential mobile apps grows as a top priority as mobile apps multiply. All the while, data breaches and ransomware reach an all-time high while costing more to mitigate.

        1) Disinformation Efforts Accelerate Challenging Data and Culture:

        Disinformation did not slow down in 2021, due to sustained advancements in communications technologies, the growth of large social media networks, and the “appification” of everything, all of which increase the ease and reach of disinformation. Disinformation is defined as incorrect information intended to mislead or disrupt, especially propaganda issued by a government organization to a rival power or the media. Examples include governments creating digital hate mobs to smear key activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion (Shelly Banjo; Bloomberg, 05/18/2019).

        Today’s disinformation war is largely digital, waged via platforms like Facebook, Twitter, Instagram, Reddit, WhatsApp, Yelp, TikTok, SMS text messages, and many other lesser-known apps. Yet even state-sponsored and private news organizations are increasingly the weapon of choice, creating a false sense of validity. Undeniably, the battlefield is wherever many followers reside.

        Bots and botnets are often behind the spread of disinformation, complicating efforts to trace and stop it. Further complicating this phenomenon is the number of app-to-app permissions. For example, the CNN and Twitter apps have permission to post to Facebook, Facebook has permission to post to WordPress, and WordPress posts to Reddit, or any combination like this. Not only does this make it hard to identify the chain of custody and the original source, it also weakens privacy and security due to the many authentication permissions involved. The data is duplicated at each of these layers, which is an additional consideration.
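The cross-posting chains described above can be modeled as a simple directed permission graph. This hypothetical Python sketch (the app names and edges are illustrative only, not real platform APIs) traces every platform a single post can propagate to:

```python
from collections import deque

# Hypothetical app-to-app permission graph: an edge A -> B means
# app A is authorized to post content into app B.
permissions = {
    "Twitter": ["Facebook"],
    "CNN": ["Facebook"],
    "Facebook": ["WordPress"],
    "WordPress": ["Reddit"],
    "Reddit": [],
}

def reachable_platforms(origin):
    """Return every platform a post from `origin` can propagate to."""
    seen, queue = set(), deque([origin])
    while queue:
        app = queue.popleft()
        for target in permissions.get(app, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# A single tweet can end up on three more platforms, each copy sitting
# behind its own authentication permission.
print(sorted(reachable_platforms("Twitter")))  # ['Facebook', 'Reddit', 'WordPress']
```

Even this toy graph shows why chain of custody is hard to establish: by the time content surfaces on Reddit, the original source is three authorization hops away.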

        We all know that false news spreads faster than real news most of the time, largely because it is sensationalized. Since most disinformation draws in viewers, which drives clicks and ad revenue, it is a money-making machine. If you can significantly control what’s trending in the news and/or social media, you impact how many people will believe it. This in turn impacts how many people will act on that belief, good or bad. This is exacerbated when combined with human bias or irrational emotion. For example, in late 2021 there were many cases of fake COVID-19 vaccines being offered in response to human fear (FDA; 09/28/2021). This negatively impacts culture by setting a misguided example of what is acceptable.

        There were several widely reported cases of political disinformation in 2021, including misleading texts, e-mails, mailers, Facebook censorship, and robocalls designed to confuse American voters amid the already stressful pandemic. Like a narcissist’s triangulation trap, these disinformation bursts riled political opponents on both sides in all states, creating miscommunication, ad hominem attacks, and even derailed careers with impacts into the future (PBS; The Hinkley Report, 11/24/20 and Daniel Funke; USA Today, 12/23/21).

        Facebook is significantly involved in disinformation, as one recent study stated: “Globally, Facebook made the wrong decision for 83 percent of those ads that had not been declared as political by their advertisers and that Facebook or the researchers deemed political. Facebook both overcounted and undercounted political ads in this group” (New York University; Cybersecurity For Democracy, 2021). Of course, Facebook disinformation whistleblower Frances Haugen, who testified before Congress in 2021, is only more evidence of these and related Facebook failings: specifically, that “Facebook executives, including CEO Mark Zuckerberg, misstated and omitted key details about what was known about Facebook and Instagram’s ability to cause harm” (Bobby Allyn; NPR, 10/05/21).

        Fig. 2. Facebook Gaps in Ad Transparency (IMEC-DistriNet KU Leuven and NYU Cyber Security for Democracy, 2021).

        With the help of Facebook’s misinformation, huge swaths of confused voters and activists aligned more with speculation and emotion/hype than with unbiased facts, and/or projected themselves as fake commentators. This dirtied the data in the election process and raises the question: which parts of the election information process are broken? It normalizes petty policy fights, emotional reasoning, and a lack of unbiased intellectualism, negatively impacting western culture. All to the threat actor’s delight. Increased public-to-private partnerships, more educational rigor, and enhanced privacy protections for election and voter data are needed to combat this disinformation.

        2) Identity and Access Management (IAM) Scrutiny Drives Zero Trust Orchestration:

        The pandemic and mass resignation/gig economy have pushed most organizations to a mass work from home (WFH) posture. Generally, this improves productivity, making it likely to become the new norm, albeit with new rules and controls. To support this, 51% of business leaders started speeding up the deployment of zero trust capabilities in 2020 (Andrew Conway; Microsoft, 08/19/20), and there is no evidence to suggest this is slowing down in the next year; rather, it is likely increasing to support zero trust orchestration. Orchestration is enhanced automation between partner zero trust applications and data, leaving next to no blind spots. This reduces risk and increases visibility and infrastructure control in an agile way. The quantified benefit of deploying mature zero trust capabilities including orchestration is on average $1.76 million less in breach response costs compared to an organization that has not rolled out zero trust capabilities (IBM Security, Cost of a Data Breach Report, 2021).

        Fig. 3. Zero Trust Components to Orchestration (Microsoft, 09/17/21).

        Zero trust moves organizations to a need-to-know-only access mindset with inherent deny rules, all the while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of apps, group membership reviews, and state-of-the-art PAM (privileged access management) tools for the next year. In the future, more of this is likely to be better automated and orchestrated (Fig. 3.) so that one part does not hinder another via complexity fog.
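As a rough illustration of that deny-by-default mindset, here is a minimal Python sketch of an RBAC check; the role names, resources, and MFA flag are hypothetical stand-ins for what a real IAM or PAM product would enforce:

```python
# Minimal deny-by-default RBAC check in the zero-trust spirit.
# Roles and resources below are illustrative, not from any product.
ROLE_GRANTS = {
    "payroll-analyst": {"payroll-db": {"read"}},
    "payroll-admin":   {"payroll-db": {"read", "write"}},
}

def is_allowed(role, resource, action, mfa_verified):
    """Grant access only on an explicit rule AND verified MFA;
    everything else is denied (assume-breach posture)."""
    if not mfa_verified:
        return False
    return action in ROLE_GRANTS.get(role, {}).get(resource, set())

# Explicit grant plus MFA -> allowed; everything else falls through to deny.
assert is_allowed("payroll-analyst", "payroll-db", "read", mfa_verified=True)
assert not is_allowed("payroll-analyst", "payroll-db", "write", mfa_verified=True)
assert not is_allowed("payroll-admin", "payroll-db", "write", mfa_verified=False)
assert not is_allowed("intern", "payroll-db", "read", mfa_verified=True)  # no rule -> deny
```

The key design point is that there is no "allow" fallback anywhere: an unknown role, resource, or action simply fails to match a grant and is denied.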

        3) Security Perimeter is Now More Defined by Data Analytics than Physical/Digital Boundaries:

        This increased WFH posture blurs the security perimeter both physically and digitally. New IP addresses, internet volume, routing, geolocation, and virtual machines (VMs) exacerbate this blur, raising the criticality of good data analytics and dashboarding to define the digital boundaries in real time. As a result, prior audits, security controls, and policies may now be ineffective. For instance, empty corporate offices are the physical byproduct of mass WFH, requiring organizations to set badge access to disabled by default. Extra security in or near server rooms is also required. The pandemic has also made vendor interactions more digital, so digital vendor connection points should be reduced and monitored in real time, and the related exception policies should be re-evaluated.

        New data lakes and machine-learning-informed patterns can better define security perimeter baselines. One example is knowing what percentage of your remote workforce is on which internet providers and connection types, for example Google Fiber, Comcast cable, CenturyLink DSL, AT&T 5G, etc. Only certain modems work with each of these networks, and that leaves a data trail. Of course, it could be any type of router. What type of device users connect with (Mac/Apple, VM, or other), and whether that device is healthy, can all be determined in relation to security perimeter analytics.
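As a toy sketch of this kind of baseline analytics, the following Python (with made-up connection data; a real feed would come from VPN or gateway logs) derives each remote user's usual ISP/device pairing and flags sessions that deviate from it:

```python
from collections import Counter

# Illustrative connection log: (user, isp, device_type) tuples such as a
# VPN gateway might emit; all values here are made up for the sketch.
connections = [
    ("alice", "Comcast", "corp-laptop"),
    ("alice", "Comcast", "corp-laptop"),
    ("bob",   "CenturyLink", "corp-laptop"),
    ("bob",   "CenturyLink", "corp-laptop"),
    ("bob",   "ATT-5G", "personal-vm"),   # deviates from bob's usual profile
]

def baseline(events):
    """Most common (isp, device) pair per user = that user's baseline."""
    per_user = {}
    for user, isp, device in events:
        per_user.setdefault(user, Counter())[(isp, device)] += 1
    return {user: c.most_common(1)[0][0] for user, c in per_user.items()}

def anomalies(events, base):
    """Sessions that do not match the user's baseline pairing."""
    return [(u, isp, dev) for u, isp, dev in events if (isp, dev) != base[u]]

base = baseline(connections)
print(anomalies(connections, base))  # flags bob's ATT-5G / personal-vm session
```

A production version would use far richer features (geolocation, MAC/OUI data, device health attestation), but the principle is the same: the perimeter is whatever the baseline says it is, and deviations get reviewed.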

        4) Supply Chain Risk and Attacks Increase Prompting Government Action:

        Every organization has a supply chain big or small. There are even subcomponents of the supply chain that can be hard to see like third/fourth-party vendors. A supply chain attack works by targeting a third/fourth party with access to an organization’s systems instead of hacking their networks directly.

        In 2021 cybercriminals focused their surveillance on key components of the supply chain, including hacking DNS servers, switches, routers, VPN concentrators and services, and other supply-chain-connected components at the vendor level. Of note was the massive Colonial Pipeline hack that spiked fuel prices last summer. It was caused by one compromised VPN account informed by a leaked password from the dark web (Turton, William; and Mehrotra, Kartikay; Bloomberg, 06/04/21). The SolarWinds hack was another supply-chain-originated attack: the attackers got into SolarWinds’ IT management product Orion, which in turn got them into the networks of most of that product’s customers (Lily Hay Newman; Wired, 12/19/21). The research consensus unsurprisingly ties this attack to Russian-affiliated threat actors, and there is no evidence contradicting that.

        In response to these and related attacks, the U.S. Presidential Administration issued Executive Order 14028, the heart of which requires those who manufacture and distribute software to develop a new awareness of their supply chain, including what is in their products, even open-source software (White House; 05/12/21). This is in addition to more spending on CISA hiring and public relations efforts around vulnerabilities and NIST framework conformance. Time will tell what this order delivers, as it depends on what private sector players do.

        Fig. 4. Supply Chain Cyber Attack Diagram (INSURETrust, 2021).

        5) Data Breaches Have Greatly Increased in Number and Cost:

        The pandemic has continued to be part of the catalyst for increased lawlessness, including fraud, ransomware, data theft, and other types of profitable hacking. Cybercriminals are more aggressively taking advantage of geopolitical conflict and gaps in legal standing. For example, almost all hacking operations are based in countries that do not have friendly geopolitical relations with the United States or its allies, and their many proxy hops stay consistent with this. These proxy hops are how they hide their true location and identity.

        Moreover, with local police departments extremely overworked and understaffed, and with their number one priority being the huge uptick in violent crime in most major cities, white-collar cybercrimes remain a low priority. Additionally, local police departments have few cyber response capabilities, depending on the size of their precinct. Often, they must sheepishly defer to the FBI, CISA, and the Secret Service, or their delegates, for help. Yet not surprisingly, there is a backlog for that as well, with preference going to large companies of national concern that fall clearly into one of the 16 critical infrastructure sectors. That is, if turf fights and bureaucratic roadblocks don’t make things worse. Thus, many mid and small-sized businesses are left out in the cold to fend for themselves, which often results in them paying the ransom and then becoming a victim a second time, all the while their insurance carrier drops them.

        Further complicating this is a lack of clarity on data breach and business interruption insurance coverage and terms. Keep in mind that most general business liability insurance policies and terms were drafted before hacking became a mainstream threat, so they are by default behind the technology. Most often, general liability business insurance covers bodily injuries and property damage resulting from your products, services, or operations. Please see my related article, 10 Things IT Executives Must Know About Cyber Insurance, to understand incident response and to reduce the risk of inadequate coverage and/or claims denials.

        According to the Identity Theft Resource Center (ITRC)’s Q3 2021 Data Breach Report, there was a 17% year-over-year increase as of 09/30/21. This means that by the time the Q4 2021 report is finished, it is likely to show more than a 30% year-over-year increase. Breaches are also more costly for the organizations suffering them, according to the IBM Security Cost of a Data Breach Report (Fig 5).

        Fig 5. Cost of A Data Breach Increases 2020 to 2021 (IBM Security, 2021).

        From 2020 to 2021 the average cost of a data breach rose from $3.86 million to $4.24 million in U.S. dollars. This is almost a 10% increase, at 9.8%. In contrast, the preceding four years were relatively flat (Fig 5). The pandemic and policing conundrum are a considerable part of this uptick.
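The arithmetic can be checked in a couple of lines, using the IBM figures cited above:

```python
# Year-over-year change in average breach cost (IBM Security figures).
cost_2020, cost_2021 = 3.86, 4.24  # USD millions
pct_increase = (cost_2021 - cost_2020) / cost_2020 * 100
print(f"{pct_increase:.1f}%")  # -> 9.8%
```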

        Lastly, this is a lot of money for an organization to spend on a breach. Yet the amount could be higher when you factor in other long-term consequence costs such as the increased risk of a second breach, brand damage, and/or delayed regulatory penalties that were below the surface, all of which differ by industry. In sum, it is cheaper and more risk-prudent to spend even $4.24 million, or a relative percentage at your organization, on preventative zero trust capabilities than to deal with the chaos of a data breach.

        Take-Aways:

        COVID-19 remains a catalyst for digital transformation in tech automation, IAM, big data, collaboration tools, and AI. We no longer have the same office and thus less badge access is needed. The growth and acceptability of mass WFH combined with the mass resignation/gig economy remind employers that great pay and culture alone are not enough to keep top talent. Signing bonuses and personalized treatment are likely needed. Single sign-on (SSO) will expand to personal devices and smartphones/watches. Geolocation-based authentication is here to stay with double biometrics likely. The security perimeter is now more defined by data analytics than physical/digital boundaries, and we should dashboard this with machine learning and AI tools.

        Education and awareness around the review and removal of non-essential mobile apps is a top priority, especially for mobile devices used separately or jointly for work purposes. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. A bring your own device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web/IT risk. Mapping the mobile ecosystem components in detail is a must.
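One simple form such an app review could take, sketched in Python with a hypothetical allowlist (the app names are illustrative, not a recommended policy):

```python
# Hypothetical BYOD app review: compare a device's installed apps
# against an approved allowlist; anything else is flagged for removal.
APPROVED = {"outlook", "authenticator", "vpn-client", "slack"}

def review_device(installed_apps):
    """Return the non-essential apps on a device, sorted for reporting."""
    return sorted(set(installed_apps) - APPROVED)

print(review_device(["outlook", "tiktok", "vpn-client", "coupon-scanner"]))
# -> ['coupon-scanner', 'tiktok']
```

In practice this check would be enforced by a mobile device management (MDM) tool rather than a script, but the allowlist-difference logic is the core of it.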

        IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about it, because if we are, our organizations will stay weak and insecure, and we will be played by the same political bias we fear confronting. As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. More social media training is needed, as many security professionals still think it is mostly an external marketing thing. Public-to-private partnerships need to improve, and app-to-app permissions need to be scrutinized. Enhanced privacy protections for election and voter data are needed. Not everyone needs to be a journalist, but everyone can have the common sense to identify malware-inspired fake news. We must report undue bias in big tech from an IT, compliance, media, and security perspective.

        Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity/fog. Organizations should preconfigure cloud-scale options and spend more on cloud-trained staff. They should also make sure they select two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and add-ons. It also mitigates risk and makes vendors bid more competitively.

        The increase in the number and cost of data breaches was partly attributed to supply chain vulnerabilities in a few national data breach incidents in 2021. Part of this was addressed in President Biden’s Executive Order 14028, which covers software supply chain security. This reminds us to replace outdated routers, switches, repeaters, and controllers, and to patch them immediately. It also reminds us to separate and limit network vendor access points to strictly what is needed and for a limited time window. Last but not least, we must have up-to-date, thorough business interruption / cyber insurance with detailed knowledge of what it requires for incident response, with breach vendors pre-selected.


        Three Points on Artificial Intelligence and Cyber-Security for 2017

        Although I am known for longer posts, I would like to offer just three things to watch for related to artificial intelligence and cyber-security in 2017, followed by two videos.

        1) Cyber attackers have long used machine learning and automation techniques to streamline their operations and may soon use full-blown artificial intelligence to do it. Botnets will become self-healing, able to detect when they are being discovered and re-route in response. The botnet and cybercrime business will grow and become more organized. Shodan, the world’s first search engine for internet-connected devices, will be used to target companies and individuals negatively. Yet it can also be used for safety and compliance monitoring, most likely when its feed is piped into another analytical tool.

        How to Hack with Shodan (For Educational Purposes Only):

        2) It won’t be long until A.I. learns the patterns of mutating viruses and can predict and/or stop them in their tracks. This depends on having the most up-to-date virus definitions and corresponding algorithms. How a zero-day is made is heavily a math problem applied to a certain context and operating system. There should be a math formula to predict the next most likely zero-day exploit; A.I. could provide this. It is a matter of calculating all possible code variants and add-on variations. It is a lot more advanced than a Rubik’s Cube.
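To give a sense of how quickly that variant space grows, here is a toy Python illustration (the site and option counts are arbitrary, chosen only to show the exponential blow-up):

```python
from itertools import product

# Toy model of "all possible code variants": with k mutation sites in a
# piece of code and v interchangeable choices per site, the variant
# space is v**k -- exponential in the number of sites.
mutation_sites = 8
variants_per_site = 4
total = variants_per_site ** mutation_sites

# Enumerating them confirms the count (feasible only for tiny k and v).
assert total == len(list(product(range(variants_per_site), repeat=mutation_sites)))
print(total)  # 65536 variants from just 8 sites with 4 options each
```

Even at this toy scale the space is in the tens of thousands, which is why any realistic prediction would need heuristics or learned patterns rather than brute-force enumeration.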
        3) A.I. has the potential to close the gap between the lesser developed world and the developed world. The technology behind A.I. is not limited to big companies like IBM or Microsoft for the long term. We may be surprised by tech start-ups out of the lesser developed world who are very creative. Lack of fiber optic cable connectivity has forced many lesser developed nations to rely heavily on cell-tower smartphone-based internet communications. This has inspired a mobile app growth wave in parts of Africa, as described here: “the use of smartphones and tablets within the country has led to a mobile revolution in Nigeria. Essentially, people now tend to seek mobile solutions more often and thus, enhance the growth of the mobile app development industry” (Top 4 Mobile App development companies in Nigeria, IT News Africa, 2015). A.I. will likely narrow the gap between these two sectors though not drastically change it. If lesser developed countries can build their own mobile apps and outsource things to A.I., they could become more independent of the economic constraints of the developed world.

        The below video highlights some of the complications around these points. It is from a conference hosted by the ICIT on April 25, 2016, and I did not attend this. In the video, Donna Dodson (Associate Director, Chief Cybersecurity Advisor and Director, NIST), Mark Kneidinger (Director, Federal Network Resiliency, DHS), Malcolm Harkins (ICIT Fellow – Cylance) and Stan Wisseman (ICIT Fellow – HPE) discuss related concepts and share realistic examples of how these technologies are reshaping the cyber-security landscape.

        ICIT Forum 2016: Artificial Intelligence Enabling Next-Generation Cybersecurity

        If you want to contact me to discuss these concepts click here.