Fig. 1. Digital Horizons Infographic, Jeremy Swenson, 2025.
Minneapolis—
The rapid technological developments of 2024 have established a foundation for significant shifts in artificial intelligence (AI), cybersecurity, digital strategy, and cryptocurrency. Business executives, policy leaders, and tech enthusiasts must pay attention to these key learnings and trends as they navigate the opportunities and challenges of 2025 and beyond. Here are eight insights to keep in mind.
1. AI Alignment with Business Goals:
2024 underscored the importance of aligning AI initiatives with overarching business strategies. Companies that successfully integrated AI into their workflows—particularly in areas like customer service automation, predictive analytics, tech orchestration, and supply chain optimization—reported not only significant productivity gains but also enhanced customer satisfaction. For instance, AI-powered tools allowed firms to anticipate customer needs with remarkable accuracy, leading to a 35% improvement in retention rates. However, misalignment of AI projects often resulted in wasted resources, showcasing the need for thorough planning. To succeed in 2025, organizations must create cross-functional AI task forces and establish KPIs tailored to their unique business objectives.[1]
2. The Rise of Responsible AI:
As AI adoption grows, so does scrutiny over its ethical implications. 2024 saw regulatory frameworks such as the EU’s AI Act and similar policies in Asia gain traction, emphasizing transparency, accountability, and fairness in AI deployments. Companies that proactively implemented explainable AI models—capable of detailing how decisions are made—not only avoided legal risks but also gained consumer trust. Moreover, organizations adopting responsible AI practices observed better team morale, as employees felt more confident about using ethically sound tools. The NIST AI Risk Management Framework is a good start. Leaders in 2025 must view responsible AI as a strategic advantage, embedding ethical considerations into every stage of AI development.[2]
3. Cyber Resilience Becomes Non-Negotiable:
The escalation of sophisticated cyber threats—including AI-driven malware and deepfake fraud—led to a dramatic increase in cybersecurity investments. Many businesses adopted zero-trust models, ensuring that no user or device is trusted by default, even within corporate networks. Product owners must build products with a DevSecOps mindset and think through misuse cases from many angles. Additionally, the integration of machine learning for anomaly detection enabled real-time identification of threats, reducing breach response times by over 50%. As the cost of cybercrime is projected to exceed $10 trillion globally by 2025, organizations must prioritize cyber resilience through advanced threat intelligence, employee training, and frequent vulnerability assessments. Cyber resilience is no longer a luxury but a fundamental pillar of operational stability.[3]
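To make the zero-trust idea concrete, here is a minimal deny-by-default authorization gate; the checks, names, and thresholds are illustrative assumptions, not a reference implementation:

```python
# Minimal zero-trust gate: every request is re-verified, nothing is
# trusted by default. All names, signals, and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    mfa_verified: bool
    device_compliant: bool   # e.g., patched, disk-encrypted, EDR running
    geo_risk_score: float    # 0.0 (normal) to 1.0 (highly anomalous)
    resource: str

ROLE_GRANTS = {"alice": {"invoices:read"}, "bob": {"invoices:read", "invoices:write"}}

def authorize(req: Request) -> bool:
    """Deny by default; grant only when every signal checks out."""
    if not req.mfa_verified or not req.device_compliant:
        return False
    if req.geo_risk_score > 0.7:   # step-up or block on risky context
        return False
    return req.resource in ROLE_GRANTS.get(req.user_id, set())

print(authorize(Request("alice", True, True, 0.1, "invoices:read")))   # True
print(authorize(Request("alice", True, False, 0.1, "invoices:read")))  # False: bad device posture
```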
4. Quantum Readiness Emerges as a Critical Strategy:
Quantum computing made significant strides in 2024, with breakthroughs in error correction and hardware scalability bringing the technology closer to mainstream use. While practical quantum computers remain years away, their potential to break traditional encryption methods has already prompted a cybersecurity rethink. Forward-looking organizations have begun transitioning to quantum-safe cryptographic algorithms, ensuring that their sensitive data remains secure against future quantum attacks. Industries like finance and healthcare—where data sensitivity is paramount—are leading the charge. By adopting a proactive quantum readiness strategy, businesses can mitigate long-term risks and position themselves as leaders in a post-quantum era.[4]
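For teams that want to experiment, below is a minimal key-encapsulation sketch assuming the open-source liboqs-python bindings (pip install liboqs-python); the mechanism name is an assumption that varies by library version, and production rollouts should follow NIST and vendor guidance rather than this toy:

```python
# Post-quantum key encapsulation sketch, assuming liboqs-python.
# "Kyber768" is the older liboqs mechanism name; newer builds expose
# the NIST-standardized equivalent as "ML-KEM-768" (FIPS 203).
import oqs

ALG = "Kyber768"  # swap for "ML-KEM-768" on newer liboqs versions

with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()        # receiver publishes this
    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender derives a shared secret plus a ciphertext from the public key.
        ciphertext, secret_sender = sender.encap_secret(public_key)
    # Receiver recovers the same secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a symmetric key
```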
5. The Blockchain Renaissance:
Blockchain technology continued to evolve beyond its cryptocurrency roots in 2024, finding innovative applications in sectors such as logistics, healthcare, and real estate. For example, blockchain’s immutable ledger capabilities enabled unprecedented transparency in supply chains, reducing fraud and enhancing consumer trust. Meanwhile, the tokenization of physical assets, such as real estate and fine art, democratized access to investment opportunities, attracting a broader range of participants. Organizations leveraging blockchain reported reduced operational costs and faster transaction times, proving that the technology’s value extends far beyond speculation. In 2025, businesses must explore blockchain’s potential as a tool for enhancing efficiency and fostering trust.[5]
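The immutability claim is easy to demonstrate: in the toy ledger below, each block commits to its predecessor's hash, so tampering with any record invalidates every later block. The shipment data is made up:

```python
# A toy ledger illustrating blockchain immutability via a hash chain.
import hashlib, json, time

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"ts": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block({"event": "genesis"}, "0" * 64)]
chain.append(make_block({"shipment": "SKU-42", "stage": "port"}, chain[-1]["hash"]))
chain.append(make_block({"shipment": "SKU-42", "stage": "warehouse"}, chain[-1]["hash"]))

def verify(chain) -> bool:
    """Recompute each hash and confirm each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected or (i > 0 and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

print(verify(chain))               # True
chain[1]["data"]["stage"] = "???"  # tamper with history
print(verify(chain))               # False: the chain exposes the edit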
6. Employee Upskilling for Digital Transformation:
The digital skills gap emerged as a critical bottleneck in 2024, prompting organizations to invest heavily in workforce development. Comprehensive upskilling programs focused on AI literacy, cybersecurity awareness, and digital strategy were launched across industries. Employees equipped with these skills demonstrated greater adaptability and productivity, enabling their organizations to better navigate technological disruptions. Additionally, companies that prioritized learning cultures saw higher retention rates, as employees valued the investment in their professional growth. As digital transformation accelerates, the ability to upskill and reskill the workforce will be a key differentiator for organizations aiming to remain competitive.[6]
7. Convergence of AI and IoT:
The integration of AI and the Internet of Things (IoT) reached new heights in 2024, driving advancements in smart factories, connected healthcare, and autonomous vehicles. AI-enabled IoT devices allowed businesses to predict equipment failures before they occurred, reducing downtime and maintenance costs by up to 20%. In healthcare, AI-powered wearable devices provided real-time insights into patient health, enabling early intervention and personalized treatment plans. The growing adoption of edge computing further enhanced the responsiveness of AI-IoT systems, enabling real-time decision-making at the device level. This convergence is set to redefine operational efficiency and customer experiences in 2025 and beyond.[7]
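A rolling z-score is one of the simplest stand-ins for the failure-prediction idea described above; the sketch below, with an assumed window size and threshold, flags a vibration spike against recent sensor history:

```python
# Predictive-maintenance sketch: flag a sensor reading whose rolling
# z-score drifts beyond a threshold. Window, threshold, and data are
# illustrative, not tuned for any real device.
from collections import deque
import statistics

def make_detector(window: int = 20, threshold: float = 3.0):
    history = deque(maxlen=window)
    def check(reading: float) -> bool:
        """Return True when the reading looks anomalous vs. recent history."""
        alarm = False
        if len(history) == history.maxlen:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history) or 1e-9
            alarm = abs(reading - mean) / stdev > threshold
        history.append(reading)
        return alarm
    return check

check = make_detector()
vibration = [1.0 + 0.01 * i for i in range(40)] + [4.0]  # slow drift, then a spike
alarms = [i for i, v in enumerate(vibration) if check(v)]
print(alarms)  # the spike at index 40 trips the detector; the slow drift does not
```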
8. The Decentralized Finance (DeFi) Evolution:
Decentralized Finance (DeFi) continued to mature in 2024, overcoming early criticisms of security vulnerabilities and lack of regulation. Enhanced interoperability between DeFi platforms and traditional financial systems enabled seamless cross-border transactions, attracting institutional investors. Innovations such as decentralized insurance and automated compliance tools further bolstered confidence in the ecosystem. As traditional banks increasingly explore blockchain for settlement and lending services, the line between centralized and decentralized finance is beginning to blur. In 2025, DeFi’s scalability and innovation are poised to challenge the dominance of legacy financial institutions, creating new opportunities for both consumers and businesses.[8]
Looking Ahead:
The intersection of AI, cybersecurity, digital strategy, and cryptocurrency offers unprecedented opportunities for value creation. However, success will hinge on leaders’ ability to navigate complexity, embrace innovation, foster outstanding leadership, and prioritize ethical stewardship. As these trends continue to evolve, businesses must remain agile and forward-thinking.
About the Author:
Jeremy A. Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and seasoned senior management tech risk and digital strategy consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota’s Technological Leadership Institute, an MBA from Saint Mary’s University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale, and New Hope Community Police Academy (MN), and the Minneapolis FBI Citizens Academy. You can follow him on LinkedIn and Twitter.
Footnotes:
[1] Smith, J. (2024). “AI’s Business Integration Challenges.” Tech Review.
[2] European Commission. (2024). “AI Act Regulatory Guidelines.” EU Tech Law Journal.
[3] Cybersecurity Ventures. (2024). “The Cost of Cybercrime: Annual Report.”
[4] Quantum Computing Report. (2024). “Quantum Progress and Cryptographic Implications.”
[5] Blockchain Association. (2024). “The Blockchain Beyond Crypto Study.”
[6] World Economic Forum. (2024). “The Future of Work: Digital Upskilling.”
[7] IoT Analytics. (2024). “The AI-IoT Convergence Report.”
[8] DeFi Pulse. (2024). “State of Decentralized Finance.”
Fig. 1. The Fallacy of Corporate Kool-Aid, Jeremy Swenson, 2024.
Minneapolis—
Corporate culture often prides itself on “innovation” and “forward-thinking,” yet more often than not, it’s hindered by bias, malignant egos, and groupthink. Ironically, in organizations claiming to embrace innovation, employees can become immersed in an environment where dissent is discouraged, and adherence to the company’s established perspectives is a prerequisite for professional survival. This “corporate Kool-Aid” fosters an atmosphere where true innovation struggles to survive. For those who genuinely want to innovate, shedding these restrictive mindsets is essential.
The Innovation Blockers: Bias, Malignant Egos, and Groupthink:
Biases are deeply embedded in most corporate structures, forming an invisible barrier that subtly yet persistently stifles new ideas. Whether it’s confirmation bias, where decision-makers favor ideas that reinforce their pre-existing beliefs, or status quo bias, which resists significant change, these biases ensure that only certain perspectives are entertained. When an organization prioritizes only safe, incremental improvements, true breakthrough ideas are abandoned. Biases in corporations thus serve as a gatekeeper against ideas that could lead to substantial innovation, as anything that doesn’t fit within the current framework is dismissed as too risky.
Ego also plays a significant role in corporate stagnation. In large corporations, leaders are often incentivized to maintain their status, limiting the emergence of truly groundbreaking ideas that may disrupt existing hierarchies. Malignant egos—those that view challenges to the status quo as personal affronts—tend to quash any idea that questions their own vision. When ego takes precedence over objective evaluation, promising concepts are often sidelined or dismissed outright, limiting the potential for progress.
Perhaps the most insidious blocker of innovation is groupthink, a phenomenon that thrives in environments where conformity is rewarded. Groupthink arises when employees, out of fear of ostracization or in pursuit of consensus, align their ideas with what they believe to be the dominant perspective. This limits a company’s ability to approach problems creatively. Once groupthink takes hold, organizations become less adaptable, focusing on pleasing internal stakeholders instead of exploring unconventional approaches that could lead to innovation.
The Alternative: Start-Ups and Their Blueprint for Innovation:
Unlike large corporations, small start-ups are known for their nimbleness and freedom from these entrenched mindsets. Start-ups, by necessity, must adopt a creative approach to stand out in a competitive market. Their size allows them to quickly adapt, test, and refine ideas based on real-world feedback. They lack the layers of management and rigid protocols that stifle creativity in corporations, allowing them to pivot and re-imagine solutions as challenges arise.
Start-ups encourage dissent and debate rather than penalizing it, knowing that innovation rarely emerges from echo chambers. In these environments, groupthink is less likely to flourish because diverse, disruptive perspectives are often essential to a start-up’s success. Without the burden of malignant egos dominating decision-making, start-ups can remain focused on solving genuine problems instead of adhering to individual agendas.
Another advantage of start-ups is their natural resistance to the biases that pervade larger corporations. Start-ups often draw talent from diverse backgrounds and ideologies, meaning biases are more likely to be challenged and less likely to dictate outcomes. This environment fosters resilience against the conformity that stifles corporate innovation, creating an ecosystem where unique ideas can grow.
Breaking Free: Encouraging Innovation Outside the Corporate Mindset:
For those within corporate structures who still wish to innovate, breaking free from the influence of corporate Kool-Aid requires courage and a willingness to challenge entrenched perspectives. Start by questioning assumptions and biases, both personal and organizational, and by fostering a culture where dissent and debate are embraced rather than discouraged. Encourage cross-departmental collaboration, and resist the urge to fall in line with the dominant viewpoint. Innovation rarely emerges from comfort zones; it thrives in the challenging, often uncomfortable process of questioning and exploring new perspectives.
To truly innovate, corporations must consider restructuring their approach. They could adopt leaner, start-up-like teams with the flexibility to pursue independent projects. They must create a culture where ideas are judged on merit, not on the ego or position of the proposer.
Conclusion:
Innovation and corporate Kool-Aid are often incompatible. The groupthink, biases, and egos prevalent in large organizations act as barriers to breakthrough thinking, driving companies to favor predictability over exploration. By shedding these restrictive mindsets and looking to the adaptable, challenge-embracing cultures of start-ups, those genuinely committed to innovation can find ways to foster creativity, disruption, and genuine progress. In doing so, they have the potential to reshape not only their organizations but also their industries—proving that sometimes, the best way forward is to spit out the Kool-Aid.
Artificial Intelligence (AI) continues to drive massive innovation across industries, reshaping business operations, customer interactions, and cybersecurity landscapes. As AI’s capabilities grow, companies are leveraging key trends to stay competitive and secure. Below are six crucial AI trends transforming businesses today, alongside critical insights on securing AI infrastructure, promoting responsible AI use, and enhancing workforce efficiency in a digital world.
1. Generative AI’s Creative Expansion
Generative AI, known for producing content ranging from text and images to music and 3D models, is expanding its reach into business innovation.[1] AI systems like GPT-4 and DALL·E are being applied across industries to automate creativity, allowing businesses to scale their marketing efforts, design processes, and product innovation.
Business Application: Marketing teams are using generative AI to create personalized, dynamic campaigns across digital platforms. Coca-Cola and Nike, for instance, have employed AI to tailor advertising content to different customer segments, improving engagement and conversion rates. Product designers in industries like fashion and automotive are also using generative models to prototype new designs faster than ever before.
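As a rough illustration of how such campaigns are wired up, here is a minimal sketch of segment-tailored copy generation using the OpenAI Python SDK (pip install openai, with OPENAI_API_KEY set); the model name, prompt, and segments are assumptions for illustration only, not a vendor recommendation:

```python
# Segment-tailored ad copy sketch using the OpenAI Python SDK.
# Model name, prompts, and segments are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
segments = ["urban runners under 30", "retired hiking enthusiasts"]

for segment in segments:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would work here
        messages=[
            {"role": "system", "content": "You write 25-word ad copy."},
            {"role": "user", "content": f"Draft ad copy for a trail shoe aimed at {segment}."},
        ],
    )
    print(segment, "->", response.choices[0].message.content)
```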
2. AI-Powered Personalization
AI’s ability to analyze vast datasets in real time is driving hyper-personalized experiences for consumers. This trend is especially important in sectors like e-commerce and entertainment, where personalized recommendations significantly impact user engagement and loyalty.
Business Application: Streaming platforms like Netflix and Spotify rely on AI algorithms to provide tailored content recommendations based on users’ preferences, viewing habits, and search history.[2] Retailers like Amazon are also leveraging AI to offer personalized shopping experiences, recommending products based on past purchases and browsing behavior, further boosting customer satisfaction.
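Under the hood, many of these systems start from something like item-to-item similarity. The sketch below scores a user's unseen items by cosine similarity over a tiny, made-up ratings matrix; real recommenders are far larger but share the core idea:

```python
# Item-based recommendation sketch: cosine similarity over a toy
# user-item ratings matrix. All names and numbers are made up.
import numpy as np

users = ["ana", "ben", "cai"]
items = ["drama", "sci-fi", "docs", "comedy"]
ratings = np.array([            # rows: users, cols: items, 0 = unseen
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx: int) -> str:
    """Suggest the unseen item most aligned with the user's rated items."""
    scores = {}
    for j, item in enumerate(items):
        if ratings[user_idx, j] == 0:            # only rank unseen items
            sims = [cosine(ratings[:, j], ratings[:, k]) * ratings[user_idx, k]
                    for k in range(len(items)) if k != j]
            scores[item] = sum(sims)
    return max(scores, key=scores.get)

print(recommend(0))  # ana has not seen "docs"; it is scored against her tastes
```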
3. AI-Driven Automation in Operations
Automation powered by AI is optimizing operations and processes across industries, from manufacturing to customer service. By automating repetitive and manual tasks, businesses are reducing costs, improving efficiency, and reallocating resources to higher-value activities.
Business Application: Tesla and Siemens are implementing AI in robotic process automation (RPA) to streamline production lines and monitor equipment for potential breakdowns. In customer service, AI chatbots and virtual assistants are being used to handle routine inquiries, providing real-time support to customers while freeing human agents to address more complex issues.
4. Securing AI Infrastructure and Development Practices
As AI adoption grows, so does the need for robust security measures to protect AI infrastructure and development processes. AI systems are vulnerable to cyberattacks, data breaches, and unauthorized access, highlighting the importance of securing AI from development to deployment.
Business Application: Organizations are recognizing the importance of securing AI models, data, and networks through multi-layered security frameworks. The U.S. AI Safety Institute Consortium is actively developing guidelines for AI safety and security, including red-teaming and risk management practices, to ensure AI systems are resilient to attacks. DevSecOps needs to be at the front end of this effort. To address challenges in securing AI, companies are pushing for standardization in AI audits and evaluations, ensuring consistency in security practices across industries.
5. AI in Predictive Analytics and Decision-Making
Predictive analytics, powered by AI, is enabling companies to forecast trends, predict consumer behavior, and make data-driven decisions with greater accuracy. This is particularly valuable in finance, healthcare, and retail, where anticipating demand or market shifts can lead to significant competitive advantages.
Business Application: Financial institutions like JPMorgan Chase are using AI for predictive analytics to evaluate market conditions, identify investment opportunities, and manage risk.[3] Retailers such as Walmart are employing AI to forecast inventory needs, helping to optimize supply chains and reduce waste. Predictive analytics also allows companies to make proactive decisions regarding customer retention and product development.
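At its simplest, demand forecasting is a trend fit. The sketch below, using synthetic weekly sales figures, projects the next four weeks with an ordinary linear regression; production systems layer in seasonality, promotions, and external signals:

```python
# Demand-forecasting sketch: fit a trend line to weekly unit sales and
# project four weeks ahead (scikit-learn; all data is synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(12).reshape(-1, 1)                 # weeks 0..11
units = np.array([100, 104, 109, 112, 118, 121,      # steady upward trend
                  127, 130, 136, 139, 145, 148])

model = LinearRegression().fit(weeks, units)
future = np.arange(12, 16).reshape(-1, 1)            # weeks 12..15
forecast = model.predict(future)

for week, qty in zip(future.ravel(), forecast):
    print(f"week {week}: order ~{qty:.0f} units")
```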
6. AI for Enhanced Cybersecurity
AI plays an increasingly pivotal role in improving cybersecurity defenses. AI-driven systems are capable of detecting anomalies, identifying potential threats, and responding to attacks in real time, offering advanced protection for both physical and digital assets.
Business Application: Leading organizations are integrating AI into cybersecurity protocols to automate threat detection and enhance system defenses. IBM’s AI-powered QRadar platform helps companies identify and respond to cyberattacks by analyzing network traffic and detecting unusual activity.[4] AI systems are also improving identity authentication through biometrics, ensuring that only authorized users gain access to sensitive data.
Moreover, businesses are adopting AI governance frameworks to secure their AI infrastructure and ensure ethical deployment. Evaluating risks associated with open- and closed-source AI development allows for transparency and the implementation of tailored security strategies across sectors.
7. Promoting Responsible AI Use and Security Governance
Beyond technical innovation, AI governance and responsible use are paramount to ensure that AI is developed and applied ethically. Promoting responsible AI use means adhering to best practices and security standards to prevent misuse and unintended harm. The NIST AI Risk Management Framework is a good reference for this.[5]
Business Application: Companies are actively developing frameworks that incorporate ethical principles throughout the lifecycle of AI systems. Microsoft and Google are leading initiatives to mitigate bias and ensure transparency in AI algorithms. Governments and private sectors are also collaborating to develop standardized guidelines and security metrics, helping organizations maintain ethical compliance and robust cybersecurity.
8. Enhancing Workforce Efficiency and Skills Development
AI’s role in enhancing workforce efficiency is not limited to automating tasks. AI-driven training and simulations are transforming how organizations develop and retain talent, particularly in cybersecurity, where skilled professionals are in high demand.
Business Application: Companies are investing in AI-driven educational platforms that simulate real-world cybersecurity scenarios, helping employees hone their skills in a dynamic, hands-on environment. These AI-powered platforms allow for personalized learning, adapting to individual skill levels and providing targeted feedback. Additionally, AI is being used to identify skill gaps within teams and recommend tailored training programs, improving workforce readiness for future challenges. Yet AI-capable people are still needed to support these applications and the managerial efforts behind them.
Conclusion: AI’s Role in Business and Security Transformation
As AI tools advance rapidly, it’s wise to assume they can access and analyze all publicly available content, including social media posts and articles like this one. While AI can offer valuable insights, organizations must remain vigilant about how these tools interact with one another, ensuring that application-to-application permissions are thoroughly scrutinized. Public-private partnerships, such as InfraGard, need to be strengthened to address these evolving challenges. Not everyone needs to be a journalist, but having the common sense to detect AI- or malware-generated fake news is crucial. It’s equally important to report any AI bias within big tech from perspectives including IT, compliance, media, and security.
Amid the AI hype, organizations should resist the urge to adopt every new tool that comes along. Instead, they should evaluate each AI system or use case based on measurable, real-world outcomes. AI’s rapid evolution is transforming both business operations and cybersecurity practices. Companies that effectively leverage trends like generative AI, predictive analytics, and automation, while prioritizing security and responsible use, will be better positioned to lead in the digital era. Securing AI infrastructure, promoting ethical AI development, and investing in workforce skills are crucial for long-term success.
Cloud infrastructure is another area that will continue to expand quickly, adding complexity to both perimeter security and compliance. Organizations should invest in AI-based cloud solutions and prioritize hiring cloud-trained staff. Diversifying across multiple cloud providers can mitigate risk, promote vendor competition, and ensure employees gain cross-platform expertise.
To navigate this complex landscape, businesses should adopt ethical, innovative, and secure AI strategies. Forming an AI governance committee is essential to managing the unique risks posed by AI, ensuring they aren’t overlooked or mistakenly merged with traditional IT risks. The road ahead holds tremendous potential, and those who proceed with careful consideration and adaptability will lead the way in AI-driven transformation.
The Oxford Media Policy Summer Institute[1], held annually for over twenty-five years in person in Oxford, UK, is a prestigious program that unites leading communications scholars, media lawyers, regulators, human rights activists, technologists, and policymakers from around the globe. As an integral part of Oxford’s Centre for Socio-Legal Studies and the Faculty of Law, specifically through the Program in Comparative Media Law and Policy (PCMLP), the Institute fosters a global and multidisciplinary understanding of the complex relationships between technology, media, and policy. It aims to broaden the pool of talented scholars and practitioners, connect them to elite professionals, facilitate interdisciplinary dialogue, and build a space for future collaborations. With over 40 participants from more than 20 countries, the Institute provides an unparalleled opportunity to engage with diverse experiences and media environments. Its alumni network, comprising leaders in government, corporations, non-profits, and academia, remains vibrant and collaborative long after the program concludes.
Reflecting on my completion of the 2024 Oxford Media Policy Summer Institute, I am struck by the depth of knowledge I gained, particularly in the areas of media, tech and diversity, and AI policy. One of the most enlightening discussions revolved around the EU’s approach to regulating platforms like Facebook, Twitter, and Google. The EU has been at the forefront of creating frameworks that balance the need for free expression with the imperative to curb harmful content. I learned about the evolving regulatory landscape, including the Digital Services Act (DSA)—which addresses content moderation, online targeted advertising, and the configuration of online interfaces and recommender systems; and the UK’s Online Safety Act—which seeks to hold tech giants accountable for the content on their platforms. These discussions highlighted the increasing importance of the “Fifth Estate,” a concept coined by William H. Dutton, referring to the networked individuals who, through the Internet, are empowering themselves in ways that challenge the control of information by traditional institutions.[2] The EU’s policies aim to regulate this new power dynamic while protecting vulnerable users and ensuring transparency and accountability.
Fig. 2. The 2024 Cohort of the Oxford Media Policy Summer Institute, 2024.
The Institute also provided invaluable insights into AI types, elections, and content moderation in the Global South. The discussions on the Global South’s technological maturity and policy governance revealed significant gaps in infrastructure, regulation, and policy. These challenges are evident in cases of internet censorship and shutdowns during political unrest, as well as instances of election manipulation. However, I also learned about innovative approaches being developed across the Global South, which could serve as models for other regions. One such approach is a proposed third-wave model of tech governance that emphasizes local context, community involvement, and adaptive regulation.[3] This model would be more responsive to the unique challenges faced by countries in the Global South, including the need to balance development goals with the protection of human rights, ensuring they are not overpowered by the tech giants, which are primarily U.S.-based. This new model aligns with the idea of the Fifth Estate, as it seeks to empower local communities and their digital influence.
A particularly compelling aspect of the Institute was the examination of Meta’s Oversight Board and its role in protecting human rights amid global tech acceleration.[4] The Oversight Board represents a novel approach to content moderation, offering a degree of independence and transparency that is rare among tech companies. However, the discussions also highlighted the challenges the Board faces, including its limited jurisdiction and the broader question of how to ensure that human rights are upheld in an era of rapid technological change. Then there is the question of independence: if the Board is funded by Meta, can it ever be truly independent?
The need for stronger international frameworks and greater cooperation among stakeholders was a recurring theme, underscoring the importance of global collaboration in addressing these challenges. The Fifth Estate plays a critical role here as well, as the collective influence of networked individuals and organizations can push for greater accountability and human rights protections in the digital age.
Fig. 3. One of many group discussions, 2024.
The issue of foreign information manipulation, particularly disinformation campaigns designed to interfere with elections, was another critical topic. The example of Russia’s interference in U.S. and Ukrainian elections served as a stark reminder of the power of disinformation in destabilizing democracies.[5] The discussions at the Institute underscored the need for robust strategies to counter such threats, including better coordination between governments, tech companies, and civil society. Cybersecurity emerged as a key area of focus, particularly in ensuring the integrity of information in an age where AI is increasingly used to create and spread false narratives.
The role of the U.S. Federal Communications Commission (FCC) in shaping the future of AI and media policy was also a major point of discussion.[6] I gained a deeper understanding of the FCC’s mandate, particularly its focus on ensuring fair competition, protecting consumers, and promoting innovation. The FCC’s approach to AI reflects cautious optimism, recognizing the potential benefits of AI while also acknowledging the need for regulation to prevent abuses. The discussions highlighted the importance of balancing innovation with the need to protect the public from potential harms, particularly in areas such as privacy and data security.
Finally, the Institute emphasized the critical role of cybersecurity in maintaining information trust, especially against the backdrop of emerging AI technologies, which I detailed in my presentation (Fig. 4). This included an overview of both the new NIST Cybersecurity Framework (CSF) 2.0, which adds a governance function, and the NIST AI Risk Management Framework (RMF), including its lifecycle swim lanes and a description of their inputs and outputs. As AI becomes more sophisticated, the potential for malicious use grows, making cybersecurity a vital component of any strategy to protect information integrity. The discussions reinforced the idea that cybersecurity must be integrated into all aspects of tech policy, from content moderation to data protection, to ensure that AI is used responsibly.
In conclusion, my experience at the 2024 Oxford Media Policy Summer Institute was truly impactful. It underscored the significance of inclusivity, collaborative technological innovation, and the vital role of private sector competition in advancing progress. The recurring focus on the growth of the Global South’s tech economy emphasized the need for adaptable and locally tailored regulatory frameworks. As AI continues to develop, the urgency for comprehensive regulation and risk management frameworks is becoming increasingly evident. However, in many areas, it is still too early for definitive solutions, highlighting the necessity for ongoing research and learning.
There is a clear need for independent entities to provide checks and balances on big tech, with the Facebook Oversight Board serving as a promising start, though much more remains to be done. The strength and independence of journalism and free speech are undermined if they are weakened by misinformed platforms or overreaching governments. Network shutdowns and censorship should be rare, thoroughly justified, and subject to transparent auditing. The Institute has provided me with knowledge of the key stakeholders and their dependencies and levels of regulation. Importantly, I obtained key connections across the globe to engage meaningfully in these critical discussions, and I am eager to apply these insights in my future endeavors, be it a tech start-up, writing, or business advisory.
Last but not least, a big thanks to my esteemed fellow classmates this year. I could not have done it so well without all of you; thanks and much respect!
Ashwini Natesan for always correctly offering the Sri Lankan perspective. Martin Fertmann for shedding light on social media oversight. Erik Longo for offering insight on the DSA and related cyber risk. Davor Ljubenkov for the emerging tech and automation insight. Carolyn Khoo for insight on ‘The Korean Wave’. Purevsuren Boldkhuyag for the Asian legal and communication insight. Elena Perotti for the on-point public policy insight. Brandie Lustbader for winning a key legal issue and setting the example of justice and free speech in media. Jan Tancinco for the great insight on video and digital content strategy and innovation with the Prince reference! Thorin Bristow for your great article “Views on AI aren’t binary – they’re plural”. Eirliani Abdul Rahman for your insight on social media and digital AI from many orgs. Hafidz Hakimi, Ph.D., for the Malaysian legal perspective. Vinti Agarwal for the Indian legal view of e-sports/gaming. Numa Dhamani for your insight on AI, tech, and book writing. Bastian Scibbe for your insight on data protection and digital rights. John Okande for the Kenyan perspective on tech governance and policy. Ivana Bjelic Vucinic for the insight on the Global Forum for Media Development (GFMD). Ibrahim Sabra for insight on digital expression and social justice. Mesfin Fikre Woldmariam for the Ethiopian perspective on tech governance and free speech. Katie Mellinger for the FCC knowledge. Margareth Kang for the Brazilian tech public policy insight. Luise Eder for helping organize and lead all of this among a bunch of crafty intellectuals. Nicole Stremlau for leading such a diverse and important agenda at a time when it is so relevant. Thanks to everyone else as well.
Fig. 1. Exploring the Landscape of AI-Generated Music, Todd S. Omohundro, 2024.
Art and technology, though seemingly different realms, have consistently converged to drive groundbreaking innovations. When these two domains intersect, they enhance each other’s potential, creating new pathways for expression, communication, and progress. Music, a quintessential form of art, has particularly benefited from technological advancements, leading to transformative changes in how music is created, distributed, and experienced. This essay explores the importance of the symbiotic relationship between art and technology in music, highlights pioneering musicians who have embraced technology, and outlines the steps to innovation in this fusion, including the significant financial and business impacts of technologies like streaming.
The Convergence of Art and Technology in Music
Music and technology have been intertwined since the earliest days of instrument development. From the invention of the piano to the electric guitar, technological advancements have continually expanded the boundaries of musical expression. In the modern era, digital technology has revolutionized music production, distribution, and consumption.
The importance of this convergence lies in its ability to democratize music creation and distribution. Technology enables musicians to produce high-quality recordings without the need for expensive studio time, distribute their music globally via digital platforms, and interact with their audience in real time through social media. This democratization has not only increased the diversity of music available but has also given rise to new genres and forms of expression that were previously unimaginable.
Pioneering Musicians in Technology
Several musicians have stood out as pioneers in integrating technology into their art, pushing the boundaries of what is possible in music.
Brian Eno: Often regarded as the godfather of ambient music, Brian Eno’s work in the 1970s with synthesizers and tape machines laid the foundation for electronic music. His innovations in the use of the studio as an instrument and his development of generative music, which uses algorithms to create ever-changing compositions, have had a lasting impact on the music industry.
Björk: Icelandic artist Björk is renowned for her avant-garde approach to music and technology. Her 2011 album “Biophilia” was released as a series of interactive apps, each corresponding to a different track. This innovative format allowed listeners to explore the music through visual and tactile interaction, blending auditory and digital experiences.
Imogen Heap: British musician Imogen Heap has been at the forefront of music technology with her development of the Mi.Mu gloves. These wearable controllers allow musicians to manipulate sound and effects through hand gestures, providing a new way to perform and interact with music.
Prince: Prince was a visionary who seamlessly integrated technology into his music. He was one of the first major artists to sell an album (1997’s “Crystal Ball”) directly to fans via the internet, bypassing traditional distribution channels. Prince’s use of digital recording techniques and electronic instruments in his music, along with his pioneering approach to online music distribution, showcased his forward-thinking approach to the convergence of music and technology.
Billy Corgan: As the frontman of The Smashing Pumpkins, Billy Corgan has been an advocate for technological advancements in music. He embraced the digital recording revolution early on and has continually pushed the boundaries of what can be achieved in the studio. His use of layered guitars and innovative recording techniques has influenced countless artists and producers.
Financial and Business Impacts of Music Technology
The fusion of music and technology has not only transformed artistic expression but has also had significant financial and business impacts. The advent of digital streaming platforms like Spotify, Apple Music, and Tidal has revolutionized the music industry’s economic model.
Revenue Streams: Streaming has created new revenue streams for artists, labels, and tech companies. While physical album sales have declined, the revenue from streaming subscriptions and ad-supported models has surged, offering artists new ways to monetize their work.
Global Reach: Technology has enabled artists to reach global audiences instantly. Musicians can now distribute their music worldwide with a single click, breaking down geographical barriers and allowing for a more diverse and inclusive music industry.
Data Analytics: Streaming platforms provide valuable data analytics to artists and labels, offering insights into listener behavior, preferences, and trends. This information helps musicians make informed decisions about marketing, touring, and production.
Direct-to-Fan Engagement: Social media and other digital tools allow artists to engage directly with their fans, fostering a more personal connection and enabling innovative marketing strategies. Crowdfunding platforms like Kickstarter and Patreon have also emerged, allowing fans to directly support their favorite artists’ projects.
Steps to Innovation in Music Technology
Innovation at the intersection of music and technology follows several key steps:
Identification of a Need or Opportunity: Innovation begins with recognizing a gap or potential for improvement. For instance, the traditional music industry’s limitations in distribution and production led to the development of digital audio workstations (DAWs) and streaming platforms.
Research and Development: This step involves exploring existing technologies and experimenting with new ideas. Musicians like Brian Eno experimented with tape loops and synthesizers to create new sounds, while modern artists might explore artificial intelligence to compose music.
Implementation and Dissemination: Once a viable innovation is developed, it must be implemented and shared with the broader community. Digital platforms like SoundCloud and Bandcamp have been instrumental in distributing new music technologies and innovations.
Feedback and Iteration: Continuous improvement based on feedback is essential. As technology evolves, so too must the tools and methods used by musicians. This iterative process ensures that innovations remain relevant and effective.
Collaboration: Innovation often requires interdisciplinary collaboration. Musicians work with software developers, engineers, and designers to create new instruments, applications, and performance tools. Björk’s “Biophilia” project, for example, involved collaboration with app developers, designers, and scientists.
Prototyping and Testing: Creating prototypes and testing them in real-world scenarios is crucial. Imogen Heap’s development of the Mi.Mu gloves involved numerous iterations and live performance testing to refine the technology.
Conclusion
The fusion of art and technology, particularly in music, has led to profound innovations that have reshaped the landscape of the industry. Pioneering musicians like Brian Eno, Björk, Imogen Heap, Billy Corgan, and Prince have not only expanded the boundaries of musical expression but have also democratized the creation and distribution of music. The integration of technology in music production and distribution has had significant financial and business impacts, revolutionizing revenue streams, global reach, data analytics, and fan engagement. By following a structured approach to innovation, which includes identifying opportunities, research and development, collaboration, prototyping, implementation, and iteration, artists can continue to push the envelope and create transformative experiences. As technology continues to evolve, the potential for new and exciting innovations in music is boundless, promising a future where the synergy of art and technology will continue to inspire and amaze.
Fig. 1. Zero Trust Components to Orchestration AI Mashup; Microsoft, 09/17/21; and Swenson, Jeremy, 03/29/24.
1. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):
The zero-trust model represents a paradigm shift in cybersecurity, advocating for the premise that no user or system, irrespective of their position within the corporate network, should be automatically trusted. This approach entails stringent enforcement of access controls and continual verification processes to validate the legitimacy of users and devices. By adopting a need-to-know-only access philosophy, often referred to as the principle of least privilege, organizations operate under the assumption of compromise, necessitating robust security measures at every level.
Implementing a zero-trust framework involves a comprehensive overhaul of traditional security practices. It entails the adoption of single sign-on functionalities at the individual device level and the enhancement of multifactor authentication protocols. Additionally, it requires the implementation of advanced role-based access controls (RBAC), fortified network firewalls, and the formulation of refined need-to-know policies. Effective application whitelisting and blacklisting mechanisms, along with regular group membership reviews, play pivotal roles in bolstering security posture. Moreover, deploying state-of-the-art privileged access management (PAM) tools, such as CyberArk for password check-out and vaulting, enables organizations to enhance toxic-combination monitoring and reporting capabilities.
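The toxic-combination idea can be illustrated in a few lines: flag any user whose combined entitlements break a separation-of-duties rule. The roles and rules below are hypothetical, meant only to show the shape of the check PAM and RBAC reviews automate:

```python
# Toxic-combination sketch: flag users whose combined entitlements
# violate separation-of-duties rules. Roles and rules are illustrative.
USER_ROLES = {
    "dana": {"ap_invoice_create", "ap_invoice_approve"},   # toxic pair
    "eli":  {"ap_invoice_create", "report_view"},
}

TOXIC_PAIRS = [
    ({"ap_invoice_create", "ap_invoice_approve"},
     "can both create and approve payments"),
    ({"db_admin", "audit_log_edit"},
     "can alter data and erase the evidence"),
]

def review(user_roles):
    """Yield (user, reason) for every separation-of-duties violation."""
    for user, roles in user_roles.items():
        for pair, reason in TOXIC_PAIRS:
            if pair <= roles:                  # user holds the full toxic set
                yield user, reason

for user, reason in review(USER_ROLES):
    print(f"VIOLATION: {user} {reason}")       # dana gets flagged
```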
App-to-app orchestration refers to the process of coordinating and managing interactions between different applications within a software ecosystem to achieve specific business objectives or workflows. It involves the seamless integration and synchronization of multiple applications to automate complex tasks or processes, facilitating efficient data flow and communication between them. Moreover, it aims to streamline and optimize various operational workflows by orchestrating interactions between disparate applications in a cohesive manner. This orchestration process typically involves defining the sequence of actions, dependencies, and data exchanges required to execute a particular task or workflow across multiple applications.
However, while the concept of zero-trust offers a compelling vision for fortifying cybersecurity, its effective implementation relies on selecting and integrating the right technological components seamlessly within the existing infrastructure stack. This necessitates careful consideration to ensure that these components complement rather than undermine the orchestration of security measures. Nonetheless, there is optimism that the rapid development and deployment of AI-based custom middleware can mitigate potential complexities inherent in orchestrating zero-trust capabilities. Through automation and orchestration, these technologies aim to streamline security operations, ensuring that the pursuit of heightened security does not inadvertently introduce operational bottlenecks or obscure visibility through complexity.
2. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:
The utilization of artificial intelligence (AI) to bolster threat detection capabilities is on the rise. Through machine learning algorithms, extensive datasets are scrutinized to discern patterns suggestive of potential security risks. This facilitates swifter and more precise identification of malicious activities. Enhanced with refined machine learning algorithms, security information and event management (SIEM) systems are adept at pinpointing anomalies in network traffic, application logs, and data flow, thereby expediting the identification of potential security incidents for organizations.
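A minimal sketch of this kind of anomaly detection, assuming scikit-learn's IsolationForest and synthetic login telemetry, looks like this; real SIEM pipelines use far richer features but the same core pattern of learning "normal" and scoring deviations:

```python
# Anomaly-detection sketch in the spirit of ML-enhanced SIEM: an
# IsolationForest learns normal login telemetry and scores outliers
# (scikit-learn; features and data are synthetic).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Features per event: [login hour, MB transferred, failed attempts]
normal = np.column_stack([
    rng.normal(10, 2, 500),      # business-hours logins
    rng.normal(20, 5, 500),      # modest data transfer
    rng.poisson(0.2, 500),       # rare failed attempts
])
model = IsolationForest(contamination=0.01, random_state=7).fit(normal)

suspicious = np.array([[3, 900, 6]])   # 3 a.m., 900 MB out, 6 failures
print(model.predict(suspicious))       # -1 means flagged as anomalous
```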
False positives should decline; they have been a sustained issue, with large, overconfident companies repeatedly wasting millions of dollars per year fine-tuning security data lakes that mostly produce garbage anomaly detection reports [1], [2] – the kind a good artificial intelligence (AI) would laugh at. We are getting there. For now, technology vendors try to solve this via better SIEM functionality at a premium price, yet we expect prices to drop sharply as the automation matures.
With enhanced natural language processing (NLP) methodologies, artificial intelligence (AI) systems possess the capability to analyze unstructured data originating from various sources such as social media feeds, images, videos, and news articles. This proficiency enables organizations to compile valuable threat intelligence, staying abreast of indicators of compromise (IOCs) and emerging attack strategies. Notable vendors offering such services include Darktrace, IBM, CrowdStrike, and numerous startups poised to enter the market. The landscape presents ample opportunities for innovation, necessitating the abandonment of past biases. Young, innovative minds well-versed in web 3.0 technologies hold significant value in this domain. Consequently, in the future, more companies are likely to opt for building their own tailored threat detection tools, leveraging advancements in AI platform technology, rather than purchasing pre-existing solutions.
Artificial intelligence (AI) isn’t just confined to threat detection; it’s increasingly playing a pivotal role in automating response actions within cybersecurity operations. This encompasses a range of tasks, including the automatic isolation of compromised systems, the blocking of malicious internet protocol (IP) addresses, the adjustment of firewall configurations, and the coordination of responses to cyber incidents—all achieved with greater efficiency and cost-effectiveness. By harnessing AI-driven algorithms, security orchestration, automation, and response (SOAR) platforms empower organizations to analyze and address security incidents swiftly and intelligently.
SOAR platforms capitalize on AI capabilities to streamline incident response processes, enabling security teams to automate repetitive tasks and promptly react to evolving threats. These platforms leverage AI not only to detect anomalies but also to craft tailored responses, thereby enhancing the overall resilience of cybersecurity infrastructures. Leading examples of such platforms include Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR, each exemplifying the fusion of AI-driven automation with comprehensive security orchestration capabilities.
Microsoft Sentinel, for instance, utilizes AI algorithms to sift through vast volumes of security data, identifying potential threats and anomalies in real time. It then orchestrates response actions, such as isolating compromised systems or blocking suspicious IP addresses, with precision and speed. Similarly, Rapid7 InsightConnect integrates AI-driven automation to streamline incident response workflows, enabling security teams to mitigate risks more effectively. FortiSOAR, on the other hand, offers a comprehensive suite of AI-powered tools for incident analysis, response automation, and threat intelligence correlation, empowering organizations to proactively defend against cyber threats. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit, leaving analysts more time for complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [3]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to counter this with the same AI but with no governance.
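Stripped of vendor specifics, a SOAR playbook is essentially a mapping from alert types to response steps. The sketch below only prints its actions; a real platform would call EDR, firewall, and IAM APIs, and every name here is hypothetical:

```python
# SOAR-style playbook sketch: map alert types to automated containment
# steps so the SOC only escalates what automation cannot settle.
PLAYBOOKS = {
    "malware_detected":  ["isolate_host", "open_ticket"],
    "brute_force_login": ["block_source_ip", "force_password_reset"],
}

def isolate_host(alert):         print(f"isolating {alert['host']}")
def open_ticket(alert):          print(f"ticket opened for {alert['id']}")
def block_source_ip(alert):      print(f"blocking {alert['src_ip']}")
def force_password_reset(alert): print(f"resetting creds for {alert['user']}")

ACTIONS = {f.__name__: f for f in
           (isolate_host, open_ticket, block_source_ip, force_password_reset)}

def respond(alert):
    """Run the playbook for a known alert type; escalate anything else."""
    for step in PLAYBOOKS.get(alert["type"], []):
        ACTIONS[step](alert)
    if alert["type"] not in PLAYBOOKS:
        print(f"escalating {alert['id']} to a human analyst")

respond({"type": "brute_force_login", "id": "A-101",
         "src_ip": "203.0.113.9", "user": "jdoe"})
```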
With the escalating migration of organizations to cloud environments, safeguarding the security of cloud assets emerges as a paramount concern. While industry giants like Microsoft, Oracle, and Amazon Web Services (AWS) dominate this landscape with their comprehensive cloud offerings, numerous large organizations opt to establish and maintain their own cloud infrastructures to retain greater control over their data and operations. In response to the evolving security landscape, the adoption of cloud security posture management (CSPM) tools has become imperative for organizations seeking to effectively manage and fortify their cloud environments.
CSPM tools play a pivotal role in enhancing the security posture of cloud infrastructures by facilitating continuous monitoring of configurations and swiftly identifying any misconfigurations that could potentially expose vulnerabilities. These tools operate by autonomously assessing cloud configurations against established security best practices, ensuring adherence to stringent compliance standards. Key facets of their functionality include the automatic identification of unnecessary open ports and the verification of proper encryption configurations, thereby mitigating the risk of unauthorized access and data breaches. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [4]. This has considerations at both the cloud user and provider level, especially since artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug-ins from different vendors, making it all the more complex.
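Conceptually, a CSPM check is just a rule evaluated against declarative configuration. The sketch below sweeps hypothetical bucket configs for exactly the two issues named above, public exposure and missing encryption:

```python
# CSPM-style sketch: sweep declarative storage configs for common
# misconfigurations. Bucket names and fields are illustrative.
BUCKETS = [
    {"name": "invoices-prod",    "public": False, "encrypted": True},
    {"name": "marketing-assets", "public": True,  "encrypted": True},
    {"name": "hr-exports",       "public": False, "encrypted": False},
]

CHECKS = [
    (lambda b: b["public"],        "publicly readable"),
    (lambda b: not b["encrypted"], "encryption at rest disabled"),
]

def scan(buckets):
    """Return (bucket, finding) pairs for every failed best-practice check."""
    return [(b["name"], finding)
            for b in buckets
            for check, finding in CHECKS if check(b)]

for name, finding in scan(BUCKETS):
    print(f"{name}: {finding}")
# marketing-assets: publicly readable
# hr-exports: encryption at rest disabled
```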
Furthermore, CSPM solutions enable organizations to proactively address security gaps and bolster their resilience against emerging threats in the dynamic cloud landscape. By providing real-time insights into the security status of cloud assets, these tools empower security teams to swiftly remediate vulnerabilities and enforce robust security controls. Additionally, CSPM platforms facilitate comprehensive compliance management by generating detailed reports and audit trails, facilitating adherence to regulatory requirements and industry standards.
In essence, as organizations navigate the complexities of cloud adoption and seek to safeguard their digital assets, CSPM tools serve as indispensable allies in fortifying cloud security postures. By offering automated monitoring, proactive threat detection, and compliance management capabilities, these solutions empower organizations to embrace the transformative potential of cloud technologies while effectively mitigating associated security risks.
Fig. 1. Quantum ChatGPT Growth Plus NIST AI Risk Management Framework Mashup [1], [2], [3].
Summary:
This year was unique: policymakers and business leaders grew concerned with artificial intelligence (AI) ethics, disinformation morphed, and AI saw hypergrowth, including links to increased crypto money laundering via splitting and mixing. Impressively, AI cyber tools became more capable in zero-trust orchestration, cloud security posture management (CSPM), and machine-learning-driven threat response; quantum-safe cryptography ripened; and authentication made real-time monitoring advances, though some hype remains. Moreover, the mass resignation and gig economy (remote work) remained a large part of the catalyst for all of these trends.
Introduction:
Every year we like to research and comment on the most impactful security technology and business happenings from the prior year. This year was unique since policymakers and business leaders grew concerned with artificial intelligence (AI) ethics [4], disinformation morphed, AI saw hypergrowth [5], crypto money laundering via splitting and mixing grew [6], and AI cyber tools became more capable – while the mass resignation and gig economy remained a large part of the catalyst for all of these trends. By August 2023, ChatGPT reached 1.43 billion website visits per month and about 180.5 million registered users [7]. This even attracted many non-technical naysayers. Impressively, the platform was only nine months old then and just turned a year old in November [8]. Usage of AI tools like ChatGPT is going to continue to grow in many sectors at exponential rates. As a result, the trends and considerations below are likely to significantly impact government, education, high-tech, startups, and large enterprises in big and small ways, albeit with some surprises.
1. The Complex Ethics of Artificial Intelligence (AI) Swarms Policy Makers and Industry Resulting in New Frameworks:
The ethical use of artificial intelligence (AI) as a conceptual and increasingly practical dilemma has gained a lot of media attention and research in the last few years from those in philosophy (ethics, privacy), politics (public policy), academia (concepts and principles), and economics (trade policy and patents) – all of whom have weighed in heavily. As a result, we find this space is beginning to mature. Sovereign states and blocs (the USA, the EU, and others globally) have developed and socialized ethical policies and frameworks [9], [10]. Meanwhile, major corporations motivated by profit are devising their own ethical vehicles and structures – often taking a legalistic view first [11]. Moreover, the World Economic Forum (WEF) has weighed in on this matter in collaboration with PricewaterhouseCoopers (PWC) [12]. All of this contributes to the accelerated maturity of the area in general. The result is the establishment of shared conceptual viewpoints, early-stage security frameworks, accepted policies, guidelines, and governance structures to support the evolution of artificial intelligence (AI) in ethical ways.
For example, the Department of Defense (DOD) has formally adopted five principles for the ethical development of artificial intelligence capabilities as follows [13]:
Responsible
Equitable
Traceable
Reliable
Governable
Traceable and governable seem to be the clearest and most important principles, while equitable and responsible seem gray at best and could be deemphasized in a heightened wartime context. The latter two echo the corporate social responsibility (CSR) efforts found more often in the private sector.
The WEF via PWC has issued its Nine AI Ethical Principles for organizations to follow [14], and the Office of the Director of National Intelligence (ODNI) has released its Framework for AI Ethics [15]. Importantly, the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework, as outlined in Figs. 2 and 3. NIST also released a playbook to support its implementation and has hosted several working sessions discussing it with industry, which we attended virtually [16]. The mapping aspect could take you down many AI rabbit holes, some unforeseen – inferring complex risk. Mapping also impacts how you measure and manage. None of this is fully clear, and much of it will change as ethical AI governance matures.
Fig. 3. NIST AI Risk Management Framework: Actors Across AI Lifecycle Stages (AI RMF) 1.0 [18].
The actors in Fig. 3 cover a wide swath of spaces where artificial intelligence (AI) plays, and appropriately so, as AI is considered a general purpose technology (GPT) like electricity, rubber, and the like – it can be applied ubiquitously in our lives [19]. This includes cognitive technology, digital reality, ambient experiences, autonomous vehicles and drones, quantum computing, distributed ledgers, and robotics, to name a few. All of these predate the emergence of generative AI, which will likely put these frameworks to the test much earlier than expected. Yet all of them can be mapped across the AI lifecycle stages in Fig. 3 to clarify the activities, actors, and dimensions; if a system proceeds to the build stage, more scrutiny will need to be applied.
Scrutiny can come in the form of DevSecOps, but that is extremely hard to do with the exponentially massive code and datasets required by the learning models, at least at this point. Moreover, we are not sure any AI ethics framework yet does justice to quality assurance (QA) and secure coding best practices. However, the above two NIST figures at least clarify relationships, flows, inputs, and outputs, though all of this will need to be greatly customized to an organization to have any teeth. We imagine those use cases will come out of future NIST working sessions with industry.
Lastly, the most crucial factor in AI ethics governance is what Fig. 3 calls “People and Planet.” This is because people and the planet can experience the negative aspects of AI in ways the designers did not imagine, and that feedback is valuable to product governance to prevent bigger AI disasters – for example, AI taking control of the air traffic control system and causing reroutes or accidents, or AI malware spreading faster than antivirus products can defend against it, creating a cyber pandemic. Thus, making sure bias is reduced and safety increased (per the DOD's five AI principles) is key but certainly not easy or clear.
2. ChatGPT and Other Artificial Intelligence (AI) Tools Have Huge Security Risks:
It is fair to start by discussing the risks posed by ChatGPT and related tools, to balance out all the positive feature coverage in the media and popular culture in recent months. First of all, with artificial intelligence (AI), every cyber threat actor has a new tool to better send spam, steal data, spread malware, build misinformation mills, grow botnets, launder cryptocurrency through shady exchanges [20], create fake profiles on multiple platforms, create fake romance chatbots, and build complex self-replicating malware that will be akin to zero-day exploits much of the time.
One commentator described it this way in his well-circulated LinkedIn article: “It can potentially be a formidable social engineering and phishing weapon where non-native speakers can create flawlessly written phishing emails. Also, it will be much simpler for all scammers to mimic their intended victim’s tone, word choice, and writing style, making it more difficult than ever for recipients to tell the difference between a genuine and fraudulent email” [21]. Think of MailChimp on steroids, with a sophisticated AI team crafting millions and billions of phishing e-mails and texts customized with impressively realistic details – including phone calls with fake voices that mimic your loved ones to build fake corroboration [22].
SAP’s Head of Cybersecurity Market Strategy, Gabriele Fiata, took the words out of our mouths when he described it this way: “The threat landscape surrounding artificial intelligence (AI) is expanding at an alarming rate. Between January to February 2023, Darktrace researchers have observed a 135% increase in “novel social engineering” attacks, corresponding with the widespread adoption of ChatGPT” [23]. This is just the beginning. More malware-as-a-service propagation, fake bank sites, travel scams, and fake IT support centers will multiply to scam and extort the weak, including elders, schools, local government, and small businesses. Then there is the increased likelihood that antivirus and data loss prevention (DLP) tools will become less effective as AI morphs. Lastly, cyber criminals can and will use generative AI for advanced evidence tampering – creating fake content to confuse or dirty the chain of custody, lessen reliability, or outright frame the wrong actor – while the government remains confused and behind the tech sector. It is truly a digital arms race.
In the next sections we will discuss how artificial intelligence (AI) can enhance information security by increasing compliance, reducing risk, enabling new features of great value, and enabling application orchestration for threat visibility.
3. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):
The zero-trust model assumes that no user or system, even those within the corporate network, should be trusted by default. Access controls are strictly enforced, and continuous verification is performed to ensure the legitimacy of users and devices. Zero-trust moves organizations to a need-to-know-only access mindset (least privilege) with inherent deny rules, all while assuming you are compromised. This infers single sign-on at the personal device level and improved multifactor authentication. It also infers better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of applications, group membership reviews, and state-of-the-art privileged access management (PAM) tools. Password check-out and vaulting tools like CyberArk will improve to better inform toxic-combination monitoring and reporting. There is still work in selecting / building the right tech components that fit into (not work against) the infrastructure orchestration stack. However, we believe rapidly built and deployed AI-based custom middleware can often alleviate security orchestration mismatches. All of this is likely to better automate and orchestrate zero-trust abilities so that one part does not hinder another via complexity fog.
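As a rough illustration of the deny-by-default mindset (not any vendor's actual policy engine), the sketch below evaluates each request against explicit allow rules plus device and MFA signals, and denies anything unmatched; the rule and request fields are hypothetical.

```python
# Toy deny-by-default access decision (sketch) in the zero-trust style:
# continuous signals are checked first, then explicit least-privilege
# allow rules; any request that matches nothing is denied.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    device_trusted: bool
    mfa_passed: bool
    resource: str

# Explicit allow rules: (role, resource prefix). Everything else is denied.
ALLOW_RULES = {("dba", "db/"), ("hr-analyst", "hr/reports/")}

def decide(req: Request) -> str:
    # Continuous verification: untrusted device or missing MFA fails fast.
    if not (req.device_trusted and req.mfa_passed):
        return "DENY"
    # Least privilege: allow only on an explicit rule match.
    for role, prefix in ALLOW_RULES:
        if req.role == role and req.resource.startswith(prefix):
            return "ALLOW"
    return "DENY"  # the inherent deny rule

print(decide(Request("ana", "dba", True, True, "db/payroll")))   # ALLOW
print(decide(Request("bob", "dba", True, False, "db/payroll")))  # DENY
```

Real deployments push these decisions into an identity provider and policy engine, but the shape of the logic is the same.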
4. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:
Artificial intelligence (AI) is increasingly being used to enhance threat detection capabilities. Machine learning algorithms analyze vast amounts of data to identify patterns indicative of potential security threats. This enables quicker and more accurate identification of malicious activities. Security information and event management (SIEM) systems enhanced with improved machine learning algorithms can detect anomalies in network traffic, application logs, and data flow – helping organizations identify potential security incidents faster.
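A minimal sketch of this kind of model, using scikit-learn's IsolationForest over invented network-flow features (the feature names and numbers are illustrative, not real telemetry):

```python
# Sketch: unsupervised anomaly detection over simple per-flow features.
# The model learns what "normal" flows look like and flags outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per flow: [bytes_sent, duration_sec, distinct_ports]
normal = rng.normal(loc=[5_000, 30, 3], scale=[1_500, 10, 1], size=(500, 3))
exfil = np.array([[250_000, 600, 40]])  # one suspicious, exfil-like flow

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flows = np.vstack([normal[:5], exfil])
for features, label in zip(flows, model.predict(flows)):
    if label == -1:  # -1 means the model scored this flow as anomalous
        print("ALERT:", features.round(1))
```

A SIEM-integrated version would score live flows continuously and route alerts into the normal triage queue.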
False positives should decline – a sustained issue in the past, with large, overconfident companies repeatedly wasting millions of dollars per year fine-tuning useless data security lakes (we have seen this) that mostly produce garbage anomaly detection reports [25], [26] – literally the kind good artificial intelligence (AI) laughs at. We are getting there. Meanwhile, technology vendors try to solve this via better SIEM functionality at an increased price, yet we expect prices to drop very low as the automation matures.
With improved natural language processing (NLP) techniques, artificial intelligence (AI) systems can analyze unstructured data sources, such as social media feeds, photos, videos, and news articles, to assemble useful threat intelligence. This ability to process and understand textual data empowers organizations to stay informed about indicators of compromise (IOCs) and new attack tactics. Vendors that provide these services include Darktrace, IBM, and CrowdStrike, and many startups will likely join soon. This space is wide open, and the biases of the past need to be forgotten if we want innovation. Young, fresh minds who know web 3.0 are valuable here. Thus, in the future more companies will likely not have to buy, but rather can build, their own customized threat detection tools informed by advancements in AI platform technology.
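As a toy version of mining unstructured text for indicators, the sketch below pulls common indicator-of-compromise (IOC) patterns out of a made-up report snippet with regular expressions; production platforms layer full NLP and reputation scoring on top of this.

```python
# Sketch: extract basic IOCs (IPv4 addresses, SHA-256 hashes, domains)
# from free text with regexes. The snippet below is invented.
import re

text = """New campaign observed: C2 at 203.0.113.45, payload sha256
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855,
staging domain evil-updates.example."""

ipv4 = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)
sha256 = re.findall(r"\b[a-fA-F0-9]{64}\b", text)
# The domain regex also matches dotted IPs, so filter those back out.
domains = [d for d in re.findall(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", text)
           if d not in ipv4]

print({"ipv4": ipv4, "sha256": sha256, "domain": domains})
```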
5. Quantum-Safe Cryptography Ripens:
Quantum computing is a quickly evolving technology that uses laws of quantum mechanics, such as superposition and quantum interference, to solve problems too complex for traditional computers [27]. Some cases where quantum computers can provide a speed boost include simulation of physical systems, machine learning (ML), optimization, and more. Traditional public-key algorithms could be vulnerable because their security rests on mathematical problems, such as integer factoring, that a large quantum computer could solve efficiently. “Industry experts generally agree that within 7-10 years, a large-scale quantum computer may exist that can run Shor’s algorithm and break current public-key cryptography causing widespread vulnerabilities” [28]. Quantum-safe or quantum-resistant cryptography is designed to withstand attacks from quantum computers, often artificial intelligence (AI) assisted – ensuring the long-term security of sensitive data. For example, AI can help enhance post-quantum cryptographic algorithms such as lattice-based cryptography or hash-based cryptography to secure communications [29]. Lattice-based cryptography is a cryptographic system based on the mathematical concept of a lattice. In a lattice, lines connect points to form a geometric structure or grid (Fig. 5).
This geometric lattice structure encodes and decodes messages. Although it looks finite, the grid is not finite in any way. Rather, it represents a pattern that continues into the infinite (Fig. 6).
Lattice-based cryptography benefits sensitive and highly targeted assets like large data centers, utilities, banks, hospitals, and government infrastructure generally. In other words, there will likely be mass adoption of quantum-safe encryption for better security (a toy code sketch follows the list below). Lastly, we used ChatGPT as an assistant to compile the below specific benefits of quantum cryptography, albeit with some manual corrections [32]:
Detection of Eavesdropping: Quantum key distribution protocols can detect the presence of an eavesdropper by the disturbance introduced during the quantum measurement process, providing a level of security beyond traditional cryptography.
Quantum-Safe Against Future Computers: Quantum computers have the potential to break many traditional cryptographic systems. Quantum cryptography is considered quantum-safe, as it relies on the fundamental principles of quantum mechanics rather than mathematical complexity.
Near Unconditional Security: Quantum cryptography provides near unconditional security based on the principles of quantum mechanics. Any attempt to intercept or measure the quantum state will disturb the system, and this disturbance can be detected. Note that ChatGPT wrongly said “unconditional security,” and we corrected it to “near unconditional security” as that is more realistic.
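Here is the toy sketch promised above: single-bit learning-with-errors (LWE) encryption in the spirit of lattice-based schemes such as Regev's, with deliberately tiny, insecure demo parameters chosen only to make the lattice idea concrete.

```python
# Toy LWE encryption of one bit (sketch, insecure demo parameters).
# Public key: (A, b = A*s + e mod q) where e is a small error vector.
# Decryption recovers r.e + bit*(q//2) mod q; keeping r.e small makes
# it decodable as "near 0" (bit 0) or "near q/2" (bit 1).
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 2053, 16, 64                      # modulus, dimension, samples

s = rng.integers(0, q, n)                   # secret key
A = rng.integers(0, q, (m, n))
e = rng.integers(-1, 2, m)                  # small error in {-1, 0, 1}
b = (A @ s + e) % q                         # public key is (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, m)               # random 0/1 combination of rows
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(u, v):
    d = (v - u @ s) % q                     # = r.e + bit*(q//2) mod q
    return int(q // 4 < d < 3 * q // 4)     # near q/2 -> 1, else 0

u, v = encrypt(1)
print(decrypt(u, v))  # prints 1
```

With m = 64 and errors in {-1, 0, 1}, the noise term stays well under q/4, which is what makes decryption unambiguous; real schemes pick parameters so this holds with overwhelming probability at cryptographic sizes.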
Artificial intelligence (AI) is used not only for threat detection but also to automate response actions [33]. This can include automatically isolating compromised systems, blocking malicious internet protocol (IP) addresses, tightening firewall rules, or orchestrating a coordinated response to a cyber incident – all for less money. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few of the current examples. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit, leaving more time for analysis of more complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [34]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to avert this with the same AI but with no governance.
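A bare-bones sketch of one such playbook step: validate an alert's source IP and, if the severity crosses a threshold, block it at a Linux host firewall with the standard iptables CLI. The alert format and threshold here are illustrative assumptions, not any SOAR product's schema.

```python
# Sketch of a SOAR-style automated containment step.
import ipaddress
import subprocess

def block_ip(ip: str) -> None:
    ipaddress.ip_address(ip)  # validate before touching the firewall
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
                   check=True)

alert = {"severity": 9, "src_ip": "198.51.100.23", "rule": "C2 beacon"}

if alert["severity"] >= 8:      # routine, automatable response
    block_ip(alert["src_ip"])
    print(f"Blocked {alert['src_ip']} ({alert['rule']}); escalating to SOC.")
```

Production playbooks would also record the action for audit, set an expiry on the block, and leave ambiguous alerts to a human analyst.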
As organizations increasingly migrate to cloud environments, ensuring the security of cloud assets becomes key. Vendors like Microsoft, Oracle, and Amazon Web Services (AWS) lead this space, yet large organizations have their own clouds for control as well. Cloud security posture management (CSPM) tools help organizations manage and secure their cloud infrastructure by continuously monitoring configurations and detecting misconfigurations that could lead to vulnerabilities [35]. These tools automatically assess cloud configurations for compliance with security best practices. This includes ensuring that only necessary ports are open and that encryption is properly configured. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [36]. This has considerations at both the cloud user and provider level, especially since artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug-ins from different vendors, making it all the more complex.
Artificial intelligence (AI) is being utilized to strengthen user authentication methods. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege [37]. Two-factor authentication remains the bare-minimum standard, with many leading identity and access management (IAM) application makers, including Okta, SailPoint, and Google, experimenting with AI for improved analytics and functionality. Both two-factor and multifactor authentication benefit from AI advancements in machine learning via real-time access-rights reassignment and improved role groupings [38]. However, multifactor remains stronger at this point because it includes something you are: biometrics. The jury is out on which method will remain the security leader because biometrics can be faked by AI [39]. Importantly, AI tools can remove fake or orphaned accounts much more quickly, reducing risk – though they likely will not get it right 100% of the time, so there is some inconvenience.
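To illustrate the behavioral-biometrics idea in miniature, the sketch below compares a session's average inter-keystroke timing against a user's enrolled baseline and flags large deviations for step-up authentication; the timings and z-score threshold are invented for the example.

```python
# Toy keystroke-dynamics check (sketch): flag sessions whose typing
# rhythm is far from the user's enrolled baseline.
import numpy as np

baseline = np.array([0.21, 0.19, 0.23, 0.20, 0.22, 0.18])  # seconds, enrolled
session = np.array([0.55, 0.60, 0.52, 0.58])               # current attempt

mu, sigma = baseline.mean(), baseline.std()
z = abs(session.mean() - mu) / sigma

if z > 3.0:  # well outside this user's normal typing rhythm
    print(f"Step-up authentication required (z = {z:.1f})")
```

Real systems model many more signals (dwell time, flight time, mouse dynamics) per user and per device, but the flag-and-escalate pattern is the same.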
Conclusion and Recommendations:
Artificial intelligence (AI) remains a leading catalyst for digital transformation in tech automation, identity and access management (IAM), big data analytics, technology orchestration, and collaboration tools. AI-based quantum computing serves to bolster encryption as old methods are replaced. All of the government actions to incubate ethics in AI are a good start, and the NIST AI Risk Management Framework (AI RMF) 1.0 is long overdue. It will likely be tweaked based on private sector feedback. However, adding the DOD's five principles for the ethical development of AI to the NIST AI RMF could derive better synergies. This approach should be used by the private sector and academia in customized ways. AI product ethical deviations should be treated as quality control and compliance issues and remediated immediately.
Organizations should consider forming an AI governance committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. ChatGPT is a good encyclopedia and a cool Boolean search tool, yet it got some things wrong about quantum computing in this article, which we cited and corrected. The Simplified AI text-to-graphics generator was cool and useful, but it needed some manual edits as well. Both of these generative AI tools will likely get better with time.
Artificial intelligence (AI) will spur many mobile malware and ransomware variants faster than Apple and Google can block them. This, in conjunction with the fact that people often have no mobile antivirus on their smartphones even when they have it on their personal and work computers, plus a culture of happy-go-lucky application downloading, makes it all the worse. As a result, more breaches should be expected via smartphones / watches / eyeglasses from AI-enabled threats.
Therefore, education and awareness around the review and removal of non-essential mobile applications is a top priority – especially for mobile devices used separately or jointly for work purposes. Containerization is required via a mobile device management (MDM) tool such as Jamf, Hexnode, VMware, or Citrix Endpoint Management. A bring-your-own-device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. Mapping the mobile ecosystem components in detail is a must, including the AI touch points.
The growth and acceptability of mass work from home (WFH), combined with the mass resignation / gig economy, remind employers that great pay and culture alone are not enough to keep top talent. At this point AI only takes away some simple jobs while creating AI support jobs, yet the percentages are not clear this early. Signing bonuses and personalized treatment are likely needed for top talent. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will likely expand to personal devices (BYOD) and smartphones / watches / eyeglasses. Geolocation-based authentication is here to stay, with double biometrics – likely fingerprint, eye scan, typing patterns, and facial recognition. The security perimeter is now defined more by data analytics than by physical / digital boundaries, and we should dashboard this with machine learning tools as the use cases evolve.
Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity / fog. Organizations should preconfigure artificial intelligence (AI) based cloud-scale options and spend more on cloud-trained staff. They should also make sure they are selecting more than two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and plug-in applications. It also mitigates risk and makes vendors bid more competitively. There is huge potential for AI synergies with cloud security posture management (CSPM) tools and threat response tools – experimentation will likely yield future dividends. Organizations should not be passive and stuck in old paradigms. The older generations should seek to learn from the younger generations without bias. Also, comprehensive logging is a must for AI tools.
In regard to cryptocurrency, non-fungible tokens (NFTs), initial coin offerings (ICOs), and related exchanges, artificial intelligence (AI) will be used by crypto scammers and those seeking to launder money. Watch out for scammers who make big claims with no details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers and advisors want to share that information and will back it up with details in many documents and filings [40]. Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erroring far on the side of compliance. This requires us to pay more attention to knowing and monitoring our own social media baselines – emerging AI data analytics can help here. If you use crypto mixer and / or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face high fees, zero customer service, no regulatory protection, and no decent Terms of Service or Privacy Policy (if any); and you have no guarantee that it will even work the way you think it will.
As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about this, because if we are, then our organizations will stay weak and outdated, and we will be plied by the same artificial intelligence (AI) generated political bias that we fear confronting. More social media training is needed, as many security professionals still think it is mostly an external marketing thing.
It is best to assume AI tools are reading all social media posts and all other available articles, including this article, which we entered into ChatGPT for feedback; it was slightly helpful, pointing out other considerations. Public-to-private partnerships (InfraGard) need to improve, and application-to-application permissions need to be more scrutinized. Not everyone needs to be a journalist, but everyone can have the common sense to identify AI / malware-inspired fake news. We must report undue AI bias in big tech from an IT, compliance, media, and security perspective. We must also resist the temptation to jump on the AI hype bandwagon, and should instead evaluate each tool and use case based on real-world business outcomes for the foreseeable future.
About the Authors:
Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.
Matthew Versaggi is a senior leader in artificial intelligence with large-company healthcare experience who has seen hundreds of use cases. He is a distinguished engineer, built an organization’s “College of Artificial Intelligence,” introduced and matured both cognitive AI technology and quantum computing, has been awarded multiple patents, is an experienced public speaker, entrepreneur, strategist, and mentor, and has international business experience. He holds an MBA in international business and economics and an MS in artificial intelligence from DePaul University, and a BS in finance and MIS and a BA in computer science from Alfred University. Lastly, he has nearly a dozen professional certificates split between AI, technology, and business strategy.
[37] Muneer, Salman, Muhammad Bux Alvi, and Amina Farrakh; “Cyber Security Event Detection Using Machine Learning Technique.” International Journal of Computational and Innovative Sciences, Vol. 2, No. 2, pp. 42–46, 2023: https://ijcis.com/index.php/IJCIS/article/view/65.
[38] Azhar, Ishaq; “Identity Management Capability Powered by Artificial Intelligence to Transform the Way User Access Privileges Are Managed, Monitored and Controlled.” International Journal of Creative Research Thoughts (IJCRT), ISSN 2320-2882, Vol. 9, Issue 1, pp. 4719–4723, January 2021: https://ssrn.com/abstract=3905119.
Fig. 1. Swenson, Jeremy, Stock; AI and InfoSec Trade-offs. 2024.
Disruptive technology refers to innovations or advancements that significantly alter the existing market landscape by displacing established technologies, products, or services, often leading to the transformation of entire industries. These innovations introduce novel approaches, functionalities, or business models that challenge traditional practices, creating a substantial impact on how businesses operate (ChatGPT, 2024). Disruptive technologies typically emerge rapidly, offering unique solutions that are more efficient, cost-effective, or user-friendly than their predecessors.
The disruptive nature of these technologies often leads to a shift in market dynamics (digital cameras or smartphones, for example), in which new entrants or previously marginalized players gain prominence while established entities may struggle to adapt to the transformative changes (ChatGPT, 2024). Examples of disruptive technologies include the advent of the internet, mobile technology, and artificial intelligence (AI), each reshaping industries and societal norms. Here are four of the leading AI tools:
1. OpenAI’s GPT:
OpenAI’s GPT (Generative Pre-trained Transformer) models, including GPT-3 and GPT-2, are predecessors to ChatGPT. These models are known for their large-scale language understanding and generation capabilities. GPT-3, in particular, is one of the most advanced language models, featuring 175 billion parameters.
2. Microsoft’s DialoGPT:
DialoGPT is a conversational AI model developed by Microsoft. It is an extension of the GPT architecture but fine-tuned specifically for engaging in multi-turn conversations. DialoGPT exhibits improved dialogue coherence and contextual understanding, making it a competitor in the chatbot space.
3. Facebook’s BlenderBot:
BlenderBot is a conversational AI model developed by Facebook. It aims to address the challenges of maintaining coherent and contextually relevant conversations. BlenderBot is trained using a diverse range of conversations and exhibits improved performance in generating human-like responses in chat-based interactions.
4. Rasa:
Rasa is an open-source conversational AI platform that focuses on building chatbots and voice assistants. Unlike some other models that are pre-trained on large datasets, Rasa allows developers to train models specific to their use cases and customize the behavior of the chatbot. It is known for its flexibility and control over the conversation flow.
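For a taste of how one of these conversational models is actually called, here is a short sketch using the Hugging Face transformers library with Microsoft's publicly released DialoGPT checkpoint, following the pattern from its public model card; it assumes the transformers and torch packages are installed.

```python
# Chat with DialoGPT (sketch). Each turn is appended to the running
# token history so the model sees the whole conversation so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None
for user_text in ["Hello!", "Can AI help with cybersecurity?"]:
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token,
                               return_tensors="pt")
    inputs = torch.cat([history, new_ids], dim=-1) if history is not None else new_ids
    history = model.generate(inputs, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[:, inputs.shape[-1]:][0],
                             skip_special_tokens=True)
    print("Bot:", reply)
```

Rasa, by contrast, is configured declaratively (intents, stories, and actions in YAML) rather than called as a single pretrained model.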
Here is a list of the pros and cons of AI-based infosec capabilities.
Pros of AI in InfoSec:
1. Improved Threat Detection:
AI enables quicker and more accurate detection of cybersecurity threats by analyzing vast amounts of data in real-time and identifying patterns indicative of malicious activities. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few of the current examples.
2. Behavioral Analysis:
AI can perform behavioral analysis to identify anomalies in user behavior or network activities, helping detect insider threats or sophisticated attacks that may go unnoticed by traditional security measures. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege.
3. Enhanced Phishing Detection:
AI algorithms can analyze email patterns and content to identify and block phishing attempts more effectively, reducing the likelihood of successful social engineering attacks.
4. Automation of Routine Tasks:
AI can automate repetitive and routine tasks, allowing cybersecurity professionals to focus on more complex issues. This helps enhance efficiency and reduces the risk of human error.
5. Adaptive Defense Systems:
AI-powered security systems can adapt to evolving threats by continuously learning and updating their defense mechanisms. This adaptability is crucial in the dynamic landscape of cybersecurity.
6. Quick Response to Incidents:
AI facilitates rapid response to security incidents by providing real-time analysis and alerts. This speed is essential in preventing or mitigating the impact of cyberattacks.
Cons of AI in InfoSec:
1. Sophistication of Attacks:
As AI is integrated into cybersecurity defenses, attackers may also leverage AI to create more sophisticated and adaptive threats, leading to a continuous escalation in the complexity of cyberattacks.
2. Ethical Concerns:
The use of AI in cybersecurity raises ethical considerations, such as privacy issues, potential misuse of AI for surveillance, and the need for transparency in how AI systems operate.
3. Cost and Resource Intensive:
Implementing and maintaining AI-powered security systems can be resource-intensive, both in terms of financial investment and skilled personnel required for development, implementation, and ongoing management.
4. False Positives and Negatives:
AI systems are not infallible and may produce false positives (incorrectly flagging normal behavior as malicious) or false negatives (failing to detect actual threats). This poses challenges in maintaining a balance between security and user convenience.
5. Lack of Human Understanding:
AI lacks contextual understanding and human intuition, which may result in misinterpretation of certain situations or the inability to recognize subtle indicators of a potential threat. This is where QA and governance come in, in case something goes wrong.
6. Dependency on Training Data:
AI models rely on training data, and if the data used is biased or incomplete, it can lead to biased or inaccurate outcomes. Ensuring diverse and representative training data is crucial to the effectiveness of AI in InfoSec.
Fig. 1. Former OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella. Getty Images, 2023.
Update: Sam Altman is returning to OpenAI as CEO, ending days of drama and negotiations with the help of heavy investor Microsoft and Silicon Valley insiders (Bloomberg, 11/22/23). In sum, there were more issues without Sam than with him and the board realized that pretty fast. So now some board members have to be shown the door.
Some may view a fired executive like Sam Altman as damaged goods but we all know that corporate boards get these things wrong all the time, and it’s more about office politics and cliques than substantive performance.
The board described their decision as a “deliberative review process which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.” Yet the board’s statement makes little sense and is out of context for an emerging technology at a time such as this.
As a result of this nonsensical firing, there was likely no job interview when Sam Altman joined Microsoft. He was already validated as a thought leader in the tech and generative AI community, so it was hardly needed. Microsoft CEO Satya Nadella was a fan and already invested billions into OpenAI. He saw the open opportunity and took it fast before another tech company could. The same thing happened when Oracle CEO Larry Ellison hired Mark Hurd in 2010 after HP fired him and the results were great.
This raises the question of how valuable job interviews are in the area of emerging tech, or for people with visible achievements. What is the H.R. screener, or some tech director in a fiefdom, going to ask you? They would hardly understand the likely answers in a meaningful way anyway. I know many tech and business leaders who have wasted time in dumb interviews in contexts such as these, and it is a poor reflection on the companies setting them up this way.
In other words, plenty of people will not want to work for OpenAI because of how Altman was publicly treated, while Microsoft looks more inclusive and forward-thinking. So I am sure many people will leave OpenAI to follow Altman at Microsoft, and that is really how OpenAI shot itself in the foot, especially considering Microsoft’s size.
Any failings and risks designed into ChatGPT are as much the problem of OpenAI as of every other company working in this vastly unknown and emerging area of tech. To blame that on Altman in this context seems unreasonable; he is thus a fall guy.
There are good and bad things with AI just like with any technology, yet the good far outweighs the bad in this context. Microsoft knows that there are problems in AI in cyber security, fraud, IP theft, and more. The bigger and more capable their AI team the better they can address these issues, now with Altman’s help.
Now, of course, Altman has to be evaluated on his performance at Microsoft making sure AI stays viable and within the approved guardrails, and hopefully innovates a few solutions to make society better. Yet the free market of other tech companies and regulators also have that responsibility.
About the Author:
Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments, including being a member of the Federal Reserve Secure Payment Task Force. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google merging Google+ video chat into Google Hangouts have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire.
Each year we like to review and comment on the most impactful technology and business concepts that are likely to significantly shape the coming year. Although this list is incomplete, these are three items worth dissecting.
3. The Hyper Expansion of Cloud Services Will Spur Competition and Innovation:
Cloud computing is a utility that relies on shared resources to achieve an economies-of-scale benefit – with high-powered services that are rapidly provisioned with minimal management effort via the internet (Fig. 1). It presently consists of three main areas: SaaS (software as a service), PaaS (platform as a service), and IaaS (infrastructure as a service). It is typically used for technology tool diversification, redundancy, disaster recovery, storage, cost reduction, high-powered computing tests and models, and even as a globalization strategy. Cloud computing generated about $127 billion in 2017 and is projected to hit $500 billion by the year 2020. At this rate, we can expect many more product startups and consulting services firms to grow and consolidate in 2018 as they are forced to be more competitive, thus bringing costs down.
The line between local and cloud computing is blurry because the cloud is now part of almost all computer functions. Consumer-facing examples include Microsoft OneDrive, Google Drive, Gmail, and the iPhone infrastructure. Apple’s cloud services are primarily used for online storage, backups, and synchronization of your mail, calendar, and contacts – all the data is available on iOS, macOS, and even on Windows devices via the iCloud control panel.
Fig. 1. Linked Use Cases for Cloud Computing.
More business-sided examples include Salesforce, SAP, IBM CRM, Oracle, Workday, VMware, ServiceNow, and Amazon Web Services. Amazon Cloud Drive offers storage for music and images purchased through Amazon Prime, as well as corporate-level storage that extends services for anything digital. Amazon’s widespread adoption of hardware virtualization and service-oriented architecture with automated utilization will sustain the growth of cloud computing. With the cloud, companies of all sizes can get their applications up and running faster, with less IT management involved and with much lower costs. Thus, they can focus on their core business and market competition.
The big question for 2018 is what new services and twists cloud computing will offer the market and how it will change our lives. In tackling this question, we should try to imagine the unimaginable. Perhaps in 2018 the cloud will be the platform where combined supercomputers can use quantum computing and machine learning to make key breakthroughs in aerospace engineering and medical science. Additionally, virtual reality as a service sounds like the next big thing; we will coin it VRaaS.
2. The Reversal of Net Neutrality is Awful for Privacy, Democracy, and Economics:
Before it was rolled back, net neutrality required service providers to treat all internet traffic equally. This is morally and logically correct because a free and open internet is just as important as freedom of the press, freedom of speech, and the free market concept. The internet should be able to enable startups, big companies, opposing media outlets, and legitimate governments in the same way and without favor. The internet is like air to all these sects of the economy and to the world.
Rolling back net neutrality is something the U.S. will regret in the coming months. Although the implications are not fully known, it may mean that fewer data centers will be built in the U.S., and it may mean that smaller companies will be bullied out of business due to gamified imbalances in the cost of internet bandwidth. Netflix and most tech companies dissented via social media, resulting in viral support (Fig. 2).
Fig. 2. Viral Netflix Opposition to Rolling Back Net Neutrality.
Lastly, it exacerbates the gap between the rich and the poor and it enables the government to have a stronger hand in influencing the tenor of news media, social norms, and worst of all political bias. As fiber optic internet connectivity expands, and innovative companies like Google, Twitter, and Facebook turn into hybrid news sources, a fully free internet is the best thing to expose their own excesses, biases, and that there are legitimate conflicting viewpoints that can be easily found.
1. Amazon’s Purchase of Whole Foods Tells Us the Gap Between Retailer and Tech Service Company is Closing:
For quite a long time I have been a fan of Amazon because they were anti-retail establishment. In fact, in Amazon’s early days, it was the retail establishment that laughed at them, suggesting they would flounder and fail: “How dare you sell used books by mail out of a garage.” Yet their business model has turned into more of a technology and logistics platform than a product-oriented one. Many large and small retailers and companies of all types employ their selling, shipping, and infrastructure platform to the degree that they are, in essence, married to Amazon. Business Insider said, “The most important deal of the year was Amazon’s $13.7 billion-dollar acquisition of Whole Foods. In one swoop, Amazon totally disrupted groceries, retail delivery, and even the enterprise IT market” (Weinberger, 12/17/17). The basis for this acquisition was that grocery delivery is underserved and has huge potential in the U.S. as the population grows, fewer people own cars, and people increasingly value not wasting time walking around a retail store (getting socialized to a new level of service) (Fig. 3).
Fig. 3. How Amazon Can Use Whole Foods to Serve High Potential Grocery Delivery.
Mr. Swenson and Mr. Mebrahtu met in graduate business school, where they collaborated on global business projects concerning leadership, team dynamics, and strategic innovation. They have had many consulting stints at leading technology companies and presently work together indirectly at Optum / UHG. Mr. Swenson is a Sr. consultant, writer, and speaker in business analysis, project management, cyber-security, process improvement, leadership, and abstract thinking. Mr. Mebrahtu is a Sr. developer, database consultant, agile specialist, application design and test consultant, and Sr. quality manager of database development.