Artificial Intelligence (AI) continues to drive massive innovation across industries, reshaping business operations, customer interactions, and cybersecurity landscapes. As AI’s capabilities grow, companies are leveraging key trends to stay competitive and secure. Below are six crucial AI trends transforming businesses today, alongside critical insights on securing AI infrastructure, promoting responsible AI use, and enhancing workforce efficiency in a digital world.
1. Generative AI’s Creative Expansion
Generative AI, known for producing content from text and images to music and 3D models, is expanding its reach into business innovation.[1] AI systems like GPT-4 and DALL·E are being applied across industries to automate creativity, allowing businesses to scale their marketing efforts, design processes, and product innovation.
Business Application: Marketing teams are using generative AI to create personalized, dynamic campaigns across digital platforms. Coca-Cola and Nike, for instance, have employed AI to tailor advertising content to different customer segments, improving engagement and conversion rates. Product designers in industries like fashion and automotive are also using generative models to prototype new designs faster than ever before.
2. AI-Powered Personalization
AI’s ability to analyze vast datasets in real time is driving hyper-personalized experiences for consumers. This trend is especially important in sectors like e-commerce and entertainment, where personalized recommendations significantly impact user engagement and loyalty.
Business Application: Streaming platforms like Netflix and Spotify rely on AI algorithms to provide tailored content recommendations based on users’ preferences, viewing habits, and search history.[2] Retailers like Amazon are also leveraging AI to offer personalized shopping experiences, recommending products based on past purchases and browsing behavior, further boosting customer satisfaction.
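As a simplified illustration of how such recommendation engines work, the sketch below scores user similarity with cosine similarity over rating vectors and suggests items that the most similar user rated highly. The catalog, user names, and ratings are hypothetical, and production recommenders use far richer models than this.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(target, others, catalog):
    """Recommend items the most similar user rated >= 4 that the target has not rated."""
    best = max(others, key=lambda o: cosine(target["ratings"], o["ratings"]))
    return [item for item, t, b in zip(catalog, target["ratings"], best["ratings"])
            if t == 0 and b >= 4]

catalog = ["thriller", "comedy", "documentary", "drama"]
alice = {"ratings": [5, 0, 4, 0]}          # 0 = not yet watched
others = [{"ratings": [5, 4, 4, 1]},       # taste close to alice's
          {"ratings": [1, 5, 0, 5]}]       # very different taste
print(recommend(alice, others, catalog))   # ['comedy']
```

The core idea, finding neighbors in preference space and borrowing their high ratings, is the same one that large-scale collaborative filtering systems refine with embeddings and deep models.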
3. AI-Driven Automation in Operations
Automation powered by AI is optimizing operations and processes across industries, from manufacturing to customer service. By automating repetitive and manual tasks, businesses are reducing costs, improving efficiency, and reallocating resources to higher-value activities.
Business Application: Tesla and Siemens are implementing AI in robotic process automation (RPA) to streamline production lines and monitor equipment for potential breakdowns. In customer service, AI chatbots and virtual assistants are being used to handle routine inquiries, providing real-time support to customers while freeing human agents to address more complex issues.
4. Securing AI Infrastructure and Development Practices
As AI adoption grows, so does the need for robust security measures to protect AI infrastructure and development processes. AI systems are vulnerable to cyberattacks, data breaches, and unauthorized access, highlighting the importance of securing AI from development to deployment.
Business Application: Organizations are recognizing the importance of securing AI models, data, and networks through multi-layered security frameworks. The U.S. AI Safety Institute Consortium is actively developing guidelines for AI safety and security, including red-teaming and risk management practices, to ensure AI systems are resilient to attacks. DevSecOps needs to be embedded at the front end of this work. To address challenges in securing AI, companies are pushing for standardization in AI audits and evaluations, ensuring consistency in security practices across industries.
5. AI in Predictive Analytics and Decision-Making
Predictive analytics, powered by AI, is enabling companies to forecast trends, predict consumer behavior, and make data-driven decisions with greater accuracy. This is particularly valuable in finance, healthcare, and retail, where anticipating demand or market shifts can lead to significant competitive advantages.
Business Application: Financial institutions like JPMorgan Chase are using AI for predictive analytics to evaluate market conditions, identify investment opportunities, and manage risk.[3] Retailers such as Walmart are employing AI to forecast inventory needs, helping to optimize supply chains and reduce waste. Predictive analytics also allows companies to make proactive decisions regarding customer retention and product development.
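At its very simplest, demand forecasting of the kind described above can be sketched as a moving average plus a local trend. The `forecast_next` helper and the sample sales figures below are illustrative assumptions, not any retailer's actual method; real systems use seasonal and machine-learned models.

```python
def forecast_next(sales, window=3):
    """Naive demand forecast: average of the last `window` periods
    plus the average period-over-period trend within that window."""
    recent = sales[-window:]
    avg = sum(recent) / len(recent)
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return avg + trend

weekly_units = [120, 130, 125, 140, 150, 160]   # hypothetical weekly sales
print(forecast_next(weekly_units))              # 160.0
```

Even this crude baseline illustrates the payoff: anticipating next period's demand lets a supply chain order ahead instead of reacting late.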
6. AI for Enhanced Cybersecurity
AI plays an increasingly pivotal role in improving cybersecurity defenses. AI-driven systems are capable of detecting anomalies, identifying potential threats, and responding to attacks in real-time, offering advanced protection for both physical and digital assets.
Business Application: Leading organizations are integrating AI into cybersecurity protocols to automate threat detection and enhance system defenses. IBM’s AI-powered QRadar platform helps companies identify and respond to cyberattacks by analyzing network traffic and detecting unusual activity.[4] AI systems are also improving identity authentication through biometrics, ensuring that only authorized users gain access to sensitive data.
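Anomaly detection of the sort these platforms perform can be illustrated with a simple statistical baseline: flag any traffic sample that sits far from the mean. The `flag_anomalies` helper and traffic figures are hypothetical; commercial tools like QRadar use far richer behavioral models.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=3.0):
    """Flag request-rate samples more than `threshold` standard deviations
    from the mean -- a crude stand-in for SIEM anomaly scoring."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if sigma and abs(x - mu) / sigma > threshold]

traffic = [102, 98, 110, 95, 105, 99, 104, 930]  # last sample is a spike
print(flag_anomalies(traffic, threshold=2.0))    # [930]
```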
Moreover, businesses are adopting AI governance frameworks to secure their AI infrastructure and ensure ethical deployment. Evaluating risks associated with open- and closed-source AI development allows for transparency and the implementation of tailored security strategies across sectors.
7. Promoting Responsible AI Use and Security Governance
Beyond technical innovation, AI governance and responsible use are paramount to ensure that AI is developed and applied ethically. Promoting responsible AI use means adhering to best practices and security standards to prevent misuse and unintended harm. The NIST AI Risk Management Framework is a good reference for this.[5]
Business Application: Companies are actively developing frameworks that incorporate ethical principles throughout the lifecycle of AI systems. Microsoft and Google are leading initiatives to mitigate bias and ensure transparency in AI algorithms. Governments and private sectors are also collaborating to develop standardized guidelines and security metrics, helping organizations maintain ethical compliance and robust cybersecurity.
8. Enhancing Workforce Efficiency and Skills Development
AI’s role in enhancing workforce efficiency is not limited to automating tasks. AI-driven training and simulations are transforming how organizations develop and retain talent, particularly in cybersecurity, where skilled professionals are in high demand.
Business Application: Companies are investing in AI-driven educational platforms that simulate real-world cybersecurity scenarios, helping employees hone their skills in a dynamic, hands-on environment. These AI-powered platforms allow for personalized learning, adapting to individual skill levels and providing targeted feedback. Additionally, AI is being used to identify skill gaps within teams and recommend tailored training programs, improving workforce readiness for future challenges. Yet AI-capable people are still needed to support these applications and the managerial efforts around them.
Conclusion: AI’s Role in Business and Security Transformation
As AI tools advance rapidly, it’s wise to assume they can access and analyze all publicly available content, including social media posts and articles like this one. While AI can offer valuable insights, organizations must remain vigilant about how these tools interact with one another, ensuring that application-to-application permissions are thoroughly scrutinized. Public-private partnerships, such as InfraGard, need to be strengthened to address these evolving challenges. Not everyone needs to be a journalist, but having the common sense to detect AI- or malware-generated fake news is crucial. It’s equally important to report any AI bias within big tech from perspectives including IT, compliance, media, and security.
Amid the AI hype, organizations should resist the urge to adopt every new tool that comes along. Instead, they should evaluate each AI system or use case based on measurable, real-world outcomes. AI’s rapid evolution is transforming both business operations and cybersecurity practices. Companies that effectively leverage trends like generative AI, predictive analytics, and automation, while prioritizing security and responsible use, will be better positioned to lead in the digital era. Securing AI infrastructure, promoting ethical AI development, and investing in workforce skills are crucial for long-term success.
Cloud infrastructure is another area that will continue to expand quickly, adding complexity to both perimeter security and compliance. Organizations should invest in AI-based cloud solutions and prioritize hiring cloud-trained staff. Diversifying across multiple cloud providers can mitigate risk, promote vendor competition, and ensure employees gain cross-platform expertise.
To navigate this complex landscape, businesses should adopt ethical, innovative, and secure AI strategies. Forming an AI governance committee is essential to managing the unique risks posed by AI, ensuring they aren’t overlooked or mistakenly merged with traditional IT risks. The road ahead holds tremendous potential, and those who proceed with careful consideration and adaptability will lead the way in AI-driven transformation.
About the Author:
Jeremy A. Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and seasoned senior management tech risk and digital strategy consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds a certificate in Media Technology from Oxford University’s Media Policy Summer Institute, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota’s Technological Leadership Institute, an MBA from Saint Mary’s University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale, and New Hope Community Police Academy (MN), and the Minneapolis FBI Citizens Academy. You can follow him on LinkedIn and Twitter.
In today’s digitally interconnected world, the cloud has emerged as a cornerstone of modern business operations, offering scalability, flexibility, and efficiency like never before. Leading vendors like Amazon Web Services (AWS), Microsoft, Oracle, and Dell offer public, private, and hybrid cloud formats. However, as businesses increasingly migrate their operations to the cloud, ensuring robust security measures becomes paramount. Here, we delve into seven essential strategies for securing the cloud effectively, emphasizing collaboration between C-suite leaders and IT stakeholders.
1) Understanding the Cloud-Shared Responsibility Model:
The first step in securing the cloud is grasping the nuances of the shared responsibility model (Fig. 1). While cloud providers manage the security of the infrastructure platform, customers are responsible for securing their data and applications, including who gets access to them and at what level. This necessitates a clear delineation of responsibilities, ensuring no security gaps exist. CIOs and CISOs must thoroughly educate themselves and their teams on this model to make informed security decisions.
2) Asking Detailed Security Questions:
It is imperative to engage cloud providers in detailed discussions regarding security measures, digging far deeper than boilerplate questions and checkbox forms. C-suite executives should inquire about specific security protocols, compliance certifications, incident response procedures, and data protection mechanisms. Organizations can mitigate risks and build trust in their cloud ecosystem by seeking transparency and understanding the provider’s security posture.
3) Implementing IAM Solutions:
Identity and access management (IAM) lies at the core of cloud security. Robust IAM solutions enable organizations to authenticate, authorize, and manage user access effectively. CIOs and CISOs should invest in IAM platforms equipped with features like multi-factor authentication, role-based access control, least privilege, and privileged access management (PAM) governance. By enforcing the principle of least privilege, businesses can minimize the risk of unauthorized access and insider threats.
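The least-privilege, deny-by-default behavior at the heart of IAM can be sketched in a few lines. The roles and permission strings below are hypothetical, shown only to illustrate the principle, not any IAM product's API.

```python
# Hypothetical role-to-permission map illustrating least privilege:
ROLE_PERMS = {
    "analyst":  {"read:logs"},
    "engineer": {"read:logs", "write:configs"},
    "admin":    {"read:logs", "write:configs", "manage:users"},
}

def is_allowed(role, permission):
    """Deny by default: a request succeeds only if the role explicitly grants it."""
    return permission in ROLE_PERMS.get(role, set())

print(is_allowed("analyst", "write:configs"))  # False
print(is_allowed("admin", "manage:users"))     # True
```

Note that an unknown role falls through to an empty permission set, so anything unmapped is denied rather than allowed, which is the design choice least privilege demands.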
4) Establishing Modern Cloud Security Policies:
A proactive approach to security entails the formulation of comprehensive cloud security policies aligned with industry best practices and regulatory requirements. Business leaders must collaborate with security professionals to develop policies covering data classification, incident response, encryption standards, and employee responsibilities. Regularly updating and reviewing these policies is essential to adapting to evolving threats and technologies, and some requirements may be country-specific.
5) Encrypting Data in Motion and at Rest:
Encryption serves as a critical safeguard for data confidentiality and integrity in the cloud. Organizations should employ robust encryption mechanisms to protect data both in transit and at rest. Utilizing encryption protocols such as TLS for network communications and AES for data storage adds an extra layer of defense against unauthorized access. Additionally, implementing reliable backup solutions ensures data resilience in the event of breaches or disasters. Backing up all key files via the 3-2-1 rule (three copies of files on two different media types, with one copy offsite) reduces the damage a ransomware attack can inflict.
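The 3-2-1 rule lends itself to a simple automated check. The sketch below, using hypothetical backup records, verifies the three conditions: copy count, media diversity, and the presence of an offsite copy.

```python
def satisfies_321(copies):
    """Check a list of backup records against the 3-2-1 rule:
    at least 3 copies, on at least 2 media types, with at least 1 offsite."""
    media = {c["media"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(copies) >= 3 and len(media) >= 2 and offsite

backups = [
    {"media": "disk",  "offsite": False},   # local NAS
    {"media": "tape",  "offsite": False},   # on-prem tape
    {"media": "cloud", "offsite": True},    # cloud object storage
]
print(satisfies_321(backups))  # True
```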
6) Educating Staff Regularly:
Human error remains one of the most significant vulnerabilities in cloud security. Therefore, ongoing employee education and awareness initiatives are indispensable. C-suite leaders must prioritize security training programs to cultivate a security-conscious culture across the organization. By educating staff on security best practices, threat awareness, and incident response protocols, businesses can fortify their defense against social engineering attacks and insider threats. Importantly, this education is far more effective when interactive and gamified to ensure participation and sustained learning outcomes.
7) Mapping and Securing Endpoints:
Endpoints serve as crucial entry points for cyber threats targeting cloud environments. CIOs and CISOs should conduct thorough assessments to identify and secure all endpoints accessing the cloud infrastructure. Visually mapping endpoints is the first step to confirm how many, what type, and where they actually are at present — this can and does change. Implementing endpoint protection solutions, enforcing device management policies, and promptly deploying security patches are essential to mitigate endpoint vulnerabilities. Furthermore, embracing technologies like zero-trust architecture enhances endpoint security by continuously verifying user identities and device integrity.
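The first mapping step, counting endpoints by type and spotting unpatched devices, can be sketched as below. The endpoint records are invented for illustration; a real inventory would be fed by an MDM or asset-management tool.

```python
from collections import Counter

# Hypothetical endpoint inventory:
endpoints = [
    {"id": "lt-01", "type": "laptop", "patched": True},
    {"id": "lt-02", "type": "laptop", "patched": False},
    {"id": "sv-01", "type": "server", "patched": True},
    {"id": "io-01", "type": "iot",    "patched": False},
]

# "How many and what type" -- the first step of endpoint mapping:
print(Counter(e["type"] for e in endpoints))

# Which endpoints still need security patches:
print([e["id"] for e in endpoints if not e["patched"]])
```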
In conclusion, securing the cloud demands a multifaceted approach encompassing collaboration, diligence, vendor communication and partnership, and innovation. By embracing the principles outlined above, organizations can strengthen their cloud security posture, mitigate risks, and foster a resilient business environment. C-suite leaders, in conjunction with IT professionals, must champion these strategies to navigate the evolving threat landscape and safeguard the future of their enterprises.
Fig. 1. Quantum ChatGPT Growth Plus NIST AI Risk Management Framework Mashup [1], [2], [3].
Summary:
This year is unique: policy makers and business leaders grew concerned with artificial intelligence (AI) ethics, disinformation morphed, and AI saw hyper-growth, including connections to increased crypto money laundering via splitting / mixing. Impressively, AI cyber tools became more capable in the areas of zero-trust orchestration, cloud security posture management (CSPM), and threat response via improved machine learning; quantum-safe cryptography ripened; and authentication made real-time monitoring advancements, while some hype remains. Moreover, the mass resignation / gig economy (remote work) remained a large part of the catalyst for all of these trends.
Introduction:
Every year we like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique: policy makers and business leaders grew concerned with artificial intelligence (AI) ethics [4], disinformation morphed, AI saw hyper-growth [5], crypto money laundering via splitting / mixing grew [6], and AI cyber tools became more capable – while the mass resignation / gig economy remained a large part of the catalyst for all of these trends. By August 2023 ChatGPT reached 1.43 billion website visits per month and about 180.5 million registered users [7]. This growth even attracted many non-technical naysayers. Impressively, the platform was only nine months old then and just turned a year old in November [8]. Usage of AI tools like ChatGPT is going to continue to grow in many sectors at exponential rates. As a result, the below trends and considerations are likely to significantly impact government, education, high-tech, startups, and large enterprises in big and small ways, albeit with some surprises.
1. The Complex Ethics of Artificial Intelligence (AI) Swarms Policy Makers and Industry Resulting in New Frameworks:
The ethical use of artificial intelligence (AI), as a conceptual and increasingly practical dilemma, has gained a lot of media attention and research in the last few years from those in philosophy (ethics, privacy), politics (public policy), academia (concepts and principles), and economics (trade policy and patents) – all of whom have weighed in heavily. As a result, we find this space is beginning to mature. Sovereign nations (the USA, the EU, and others globally) have developed and socialized ethical policies and frameworks [9], [10], while major corporations, motivated by profit, are devising their own ethical vehicles and structures – often taking a legalistic view first [11]. Moreover, the World Economic Forum (WEF) has weighed in on this matter in collaboration with PricewaterhouseCoopers (PwC) [12]. All of this contributes to the accelerated maturation of the area in general. The result is the establishment of shared conceptual viewpoints, early-stage security frameworks, accepted policies, guidelines, and governance structures to support the evolution of artificial intelligence (AI) in ethical ways.
For example, the Department of Defense (DOD) has formally adopted five principles for the ethical development of artificial intelligence capabilities as follows [13]:
Responsible
Equitable
Traceable
Reliable
Governable
Traceable and governable seem to be the clearest and most important principles, while equitable and responsible seem gray at best and could be deemphasized in a heightened wartime context. The latter two echo the corporate social responsibility (CSR) efforts found more often in the private sector.
The WEF, via PwC, has issued its Nine AI Ethical Principles for organizations to follow [14], and the Office of the Director of National Intelligence (ODNI) has released its Framework for AI Ethics [15]. Importantly, the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework, as outlined in Fig. 2 and Fig. 3. NIST also released a playbook to support its implementation and has hosted several working sessions discussing it with industry, which we attended virtually [16]. It seems the mapping aspect could take you down many AI rabbit holes, some unforeseen – implying complex risk. Mapping also impacts how you measure and manage. None of this is fully clear, and much of it will change as ethical AI governance matures.
Fig. 3. NIST AI Risk Management Framework: Actors Across AI Lifecycle Stages (AI RMF) 1.0 [18].
The actors in Fig. 3 cover a wide swath of spaces where artificial intelligence (AI) plays, and appropriately so, as AI is considered a GPT (general purpose technology) like electricity, rubber, and the like – it can be applied ubiquitously in our lives [19]. This includes cognitive technology, digital reality, ambient experiences, autonomous vehicles and drones, quantum computing, distributed ledgers, and robotics, to name a few. These all predate the emergence of generative AI, which will likely put these frameworks to the test much earlier than expected. Yet all of these can be mapped across the AI lifecycle stages in Fig. 3 to clarify the activities, actors, and dimensions; if a project reaches the build stage, then more scrutiny will need to be applied.
Scrutiny can come in the form of DevSecOps, but that is extremely hard to do with the exponentially massive code and datasets required by AI learning models, at least at this point. Moreover, we are not sure any AI ethics framework yet does justice to quality assurance (QA) and secure coding best practices. However, the above two NIST figures at least clarify relationships, flows, inputs, and outputs; all of this will need to be greatly customized to an organization to have any teeth. We imagine those use cases will come out of future NIST working sessions with industry.
Lastly, the most crucial factor in AI ethics governance is what Fig. 3 calls “People and Planet”. This is because people and the planet can experience the negative aspects of AI in ways the designers did not imagine, and that feedback is valuable to product governance to prevent bigger AI disasters. For example, AI could take control of the air traffic control system and cause reroutes or accidents, or AI malware could spread faster than antivirus products can defend against it, creating a cyber pandemic. Thus, making sure bias is reduced and safety increased (per the DOD's five AI principles) is key, but certainly not easy or clear.
2. ChatGPT and Other Artificial Intelligence (AI) Tools Have Huge Security Risks:
It is fair to start off discussing the risks posed by ChatGPT and related tools to balance out all the positive feature coverage in the media and popular culture in recent months. First of all, with artificial intelligence (AI), every cyber threat actor has a new tool to better send spam, steal data, spread malware, build misinformation mills, grow botnets, launder cryptocurrency through shady exchanges [20], create fake profiles on multiple platforms, create fake romance chatbots, and to build the most complex self-replicating malware that will be akin to zero-day exploits much of the time.
One commentator described it this way in his well-circulated LinkedIn article: “It can potentially be a formidable social engineering and phishing weapon where non-native speakers can create flawlessly written phishing emails. Also, it will be much simpler for all scammers to mimic their intended victim’s tone, word choice, and writing style, making it more difficult than ever for recipients to tell the difference between a genuine and fraudulent email” [21]. Think of MailChimp on steroids, with a sophisticated AI team crafting billions of phishing e-mails / texts customized with impressively realistic details, including phone calls with fake voices that mimic your loved ones to build fake corroboration [22].
SAP’s Head of Cybersecurity Market Strategy, Gabriele Fiata, took the words out of our mouths when he described it this way, “The threat landscape surrounding artificial intelligence (AI) is expanding at an alarming rate. Between January to February 2023, Darktrace researchers have observed a 135% increase in “novel social engineering” attacks, corresponding with the widespread adoption of ChatGPT” [23]. This is just the beginning. More malware-as-a-service propagation, fake bank sites, travel scams, and fake IT support centers will multiply to scam and extort the vulnerable, including elders, schools, local government, and small businesses. Then there is the increased likelihood that antivirus and data loss prevention (DLP) tools will become less effective as AI morphs. Lastly, cyber criminals can and will use generative AI for advanced evidence tampering by creating fake content to confuse or dirty the chain of custody, lessen reliability, or outright frame the wrong actor – while the government is confused and behind the tech sector. It is truly a digital arms race.
In the next section we will discuss how artificial intelligence (AI) can enhance information security by increasing compliance, reducing risk, enabling new features of great value, and enabling application orchestration for threat visibility.
3. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):
The zero-trust model assumes that no user or system, even those within the corporate network, should be trusted by default. Access controls are strictly enforced, and continuous verification is performed to ensure the legitimacy of users and devices. Zero-trust moves organizations to a need-to-know-only access mindset (least privilege) with inherent deny rules, all the while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of applications, group membership reviews, and state-of-the-art privileged access management (PAM) tools. Password check-out and vaulting tools like CyberArk will improve to better inform toxic combination monitoring and reporting. There is still work in selecting / building the right tech components that fit into (rather than work against) the infrastructure orchestration stack. However, we believe rapidly built and deployed AI-based custom middleware can alleviate security orchestration mismatches in many cases. All of this is likely to better automate and orchestrate zero-trust capabilities so that one part does not hinder another through complexity fog.
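The deny-by-default evaluation at the heart of zero-trust can be sketched as follows. The policy shape, user names, and posture checks (device compliance, MFA) are illustrative assumptions, not any vendor's API.

```python
def evaluate(request, policies):
    """Zero-trust check: deny unless an explicit policy allows this user and
    resource AND the request passes device-posture and MFA verification."""
    for p in policies:
        if (p["user"] == request["user"]
                and p["resource"] == request["resource"]
                and request["device_compliant"]
                and request["mfa_passed"]):
            return "allow"
    return "deny"  # the inherent deny rule: no match means no access

policies = [{"user": "dana", "resource": "payroll-db"}]
print(evaluate({"user": "dana", "resource": "payroll-db",
                "device_compliant": True, "mfa_passed": True}, policies))   # allow
print(evaluate({"user": "dana", "resource": "payroll-db",
                "device_compliant": False, "mfa_passed": True}, policies))  # deny
```

Note that even the authorized user is denied when the device fails its compliance check, which is the "continuous verification" property described above.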
4. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:
Artificial intelligence (AI) is increasingly being used to enhance threat detection capabilities. Machine learning algorithms analyze vast amounts of data to identify patterns indicative of potential security threats. This enables quicker and more accurate identification of malicious activities. Security information and event management (SIEM) systems enhanced with improved machine learning algorithms can detect anomalies in network traffic, application logs, and data flow – helping organizations identify potential security incidents faster.
False positives should also be reduced. This has been a sustained issue in the past, with large, overconfident companies repeatedly wasting millions of dollars per year fine-tuning useless data security lakes (we have seen this) that mostly produce garbage anomaly detection reports [25], [26] – literally the kind good artificial intelligence (AI) laughs at. We are getting there. All the while, technology vendors try to solve this via better SIEM functionality at an increased price. Yet we expect prices to drop sharply as the automation matures.
With improved natural language processing (NLP) techniques, artificial intelligence (AI) systems can analyze unstructured data sources, such as social media feeds, photos, videos, and news articles – to assemble useful threat intelligence. This ability to process and understand textual data empowers organizations to stay informed about indicators of compromise (IOCs) and new attack tactics. Vendors that provide these services include Darktrace, IBM, and CrowdStrike, and many startups will likely join soon. This space is wide open, and the biases of the past need to be forgotten if we want innovation. Young fresh minds who know web 3.0 are valuable here. Thus, in the future more companies will likely not have to buy but rather can build their own customized threat detection tools informed by advancements in AI platform technology.
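A tiny slice of this NLP-driven threat intelligence, extracting candidate indicators of compromise from free text with regular expressions, might look like the sketch below. The patterns are deliberately crude and the report text is invented; production pipelines add validation, context, and machine-learned scoring.

```python
import re

# Deliberately simple IOC patterns (real extractors are far stricter):
IOC_PATTERNS = {
    "ipv4":   r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "sha256": r"\b[a-fA-F0-9]{64}\b",
    "domain": r"\b[\w-]+\.(?:com|net|org|io)\b",
}

def extract_iocs(text):
    """Pull likely indicators of compromise out of unstructured text."""
    return {name: re.findall(pat, text) for name, pat in IOC_PATTERNS.items()}

report = "Beacon traffic to 203.0.113.77 and evil-update.net was observed."
print(extract_iocs(report))
```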
5. Quantum-Safe Cryptography Ripens:
Quantum computing is a quickly evolving technology that uses laws of quantum mechanics, such as superposition and quantum interference, to solve problems too complex for traditional computers [27]. Some cases where quantum computers can provide a speed boost include simulation of physical systems, machine learning (ML), optimization, and more. Traditional cryptographic algorithms could be vulnerable because they were built and coded with weaker technologies that have solvable patterns, at least in many cases. “Industry experts generally agree that within 7-10 years, a large-scale quantum computer may exist that can run Shor’s algorithm and break current public-key cryptography causing widespread vulnerabilities” [28]. Quantum-safe or quantum-resistant cryptography is designed to withstand attacks from quantum computers, often artificial intelligence (AI) assisted – ensuring the long-term security of sensitive data. For example, AI can help enhance post-quantum cryptographic algorithms such as lattice-based cryptography or hash-based cryptography to secure communications [29]. Lattice-based cryptography is a cryptographic system based on the mathematical concept of a lattice. In a lattice, lines connect points to form a geometric structure or grid (Fig. 5).
This geometric lattice structure encodes and decodes messages. Although it looks finite, the grid is not finite in any way. Rather, it represents a pattern that continues into the infinite (Fig. 6).
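To make the lattice idea concrete, below is a toy, deliberately insecure sketch of learning-with-errors (LWE) style encryption of a single bit, the mathematical core behind much lattice-based cryptography. The tiny parameters (q = 97, n = 8) are assumptions chosen only for readability; real schemes use vastly larger moduli and dimensions.

```python
import random

# Toy, insecure LWE sketch -- illustration only, never use for real data:
q, n = 97, 8                                   # small modulus and secret dimension
s = [random.randrange(q) for _ in range(n)]    # secret key vector

def encrypt(bit):
    """Hide a bit in the high half of the modulus, masked by a noisy dot product."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-2, 3)                # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(ct):
    """Strip the mask with the secret, then decide which half the value lands in."""
    a, b = ct
    v = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

print([decrypt(encrypt(b)) for b in [0, 1, 1, 0]])  # [0, 1, 1, 0]
```

The security intuition is that recovering `s` from many noisy products is a hard lattice problem, even for quantum computers, whereas without the noise it would be simple linear algebra.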
Lattice-based cryptography benefits sensitive and highly targeted assets like large data centers, utilities, banks, hospitals, and government infrastructure generally. In other words, there will likely be mass adoption of quantum-resistant encryption for better security. Lastly, we used ChatGPT as an assistant to compile the below specific benefits of quantum cryptography, albeit with some manual corrections [32]:
Detection of Eavesdropping: Quantum key distribution protocols can detect the presence of an eavesdropper by the disturbance introduced during the quantum measurement process, providing a level of security beyond traditional cryptography.
Quantum-Safe Against Future Computers: Quantum computers have the potential to break many traditional cryptographic systems. Quantum cryptography is considered quantum-safe, as it relies on the fundamental principles of quantum mechanics rather than mathematical complexity.
Near Unconditional Security: Quantum cryptography provides near unconditional security based on the principles of quantum mechanics. Any attempt to intercept or measure the quantum state will disturb the system, and this disturbance can be detected. Note that ChatGPT wrongly said “unconditional Security” and we corrected to “near unconditional security” as that is more realistic.
6. Artificial Intelligence (AI) Automates Incident Response via SOAR:
Artificial intelligence (AI) is used not only for threat detection but also in automating response actions [33]. This can include automatically isolating compromised systems, blocking malicious internet protocol (IP) addresses, closing firewalls, or orchestrating a coordinated response to a cyber incident – all for less money. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few of the current examples. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit, leaving more time for analysis of more complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [34]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to avert this with the same AI but with no governance.
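A SOAR playbook is, at its core, a mapping from alert types to ordered response actions, with unknown alerts escalated to a human. The alert types and action names below are hypothetical, meant only to show the automation pattern, not any platform's schema.

```python
# Hypothetical SOAR-style playbook: map alert types to automated responses.
PLAYBOOK = {
    "malware_detected":  ["isolate_host", "collect_forensics", "notify_soc"],
    "bruteforce_login":  ["block_source_ip", "force_password_reset"],
    "data_exfiltration": ["isolate_host", "block_source_ip", "notify_soc"],
}

def respond(alert):
    """Return ordered response actions; unknown alerts escalate to a human analyst."""
    return PLAYBOOK.get(alert["type"], ["escalate_to_analyst"])

print(respond({"type": "bruteforce_login", "src": "198.51.100.9"}))
print(respond({"type": "novel_behavior"}))  # falls through to a human
```

The escalation default is the design point: automation catches the low-hanging fruit while anything unrecognized lands with an analyst, mirroring the SOC workflow described above.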
As organizations increasingly migrate to cloud environments, securing cloud assets becomes critical. Vendors like Microsoft, Oracle, and Amazon Web Services (AWS) lead this space, though large organizations also run their own clouds for control. Cloud security posture management (CSPM) tools help organizations manage and secure their cloud infrastructure by continuously monitoring configurations and detecting misconfigurations that could lead to vulnerabilities [35]. These tools automatically assess cloud configurations for compliance with security best practices, including ensuring that only necessary ports are open and that encryption is properly configured. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [36]. This has considerations at both the cloud user and provider level, especially since artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these builds often use approved plug-ins from different vendors, making it all the more complex.
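The bucket-monitoring idea in the quote above can be sketched in a few lines: scan an inventory of storage bucket configurations and flag public access or missing encryption. The field names are illustrative assumptions, not any cloud provider's real configuration schema.

```python
# Minimal sketch of a CSPM-style check: flag storage buckets that are
# publicly readable or unencrypted. Field names are hypothetical.

def find_misconfigurations(buckets: list[dict]) -> list[str]:
    """Return a list of human-readable findings for risky bucket settings."""
    findings = []
    for b in buckets:
        if b.get("public_read", False):
            findings.append(f"{b['name']}: publicly readable")
        if not b.get("encrypted", False):
            findings.append(f"{b['name']}: encryption disabled")
    return findings

issues = find_misconfigurations([
    {"name": "logs", "public_read": False, "encrypted": True},
    {"name": "backups", "public_read": True, "encrypted": False},
])
# "backups" is flagged twice: once for public access, once for encryption
```

Production CSPM tools run checks like this continuously across every storage service and account, which is what gives them the "clear visibility" the quote describes.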
Artificial intelligence (AI) is being utilized to strengthen user authentication methods. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege [37]. Two-factor authentication remains the baseline standard, with many leading identity and access management (IAM) vendors, including Okta, SailPoint, and Google, experimenting with AI for improved analytics and functionality. Both two-factor and multifactor authentication benefit from AI advancements in machine learning via real-time access rights reassignment and improved role groupings [38]. However, multifactor remains stronger at this point because it includes something you are: biometrics. The jury is out on which method will remain the security leader because biometrics can be faked by AI [39]. Importantly, AI tools can remove fake or orphaned accounts much more quickly, reducing risk. However, they likely will not get it right 100% of the time, so some inconvenience is inevitable.
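As a toy illustration of behavioral-biometric scoring, the sketch below compares a session's mean keystroke interval against a user's baseline and flags large deviations. The z-score threshold and the single feature are illustrative assumptions; production systems use many more signals and trained models rather than a simple statistical cutoff.

```python
# Toy behavioral-biometrics check: flag sessions whose mean keystroke
# interval deviates sharply from the user's baseline. Threshold and
# feature choice are illustrative assumptions only.
from statistics import mean, stdev

def is_suspicious(baseline_ms: list[float], session_ms: list[float],
                  z_threshold: float = 3.0) -> bool:
    """Return True when the session's mean interval is an outlier vs baseline."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    if sigma == 0:
        return False  # no variance recorded; cannot score
    z = abs(mean(session_ms) - mu) / sigma
    return z > z_threshold

baseline = [110, 120, 115, 125, 118, 122]  # user's normal typing intervals (ms)
```

A session typed at a very different rhythm (say, triple the usual interval) would be flagged for step-up authentication rather than blocked outright, which matches how these signals are used in practice: as one extra layer, not a sole gatekeeper.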
Conclusion and Recommendations:
Artificial intelligence (AI) remains a leading catalyst for digital transformation in tech automation, identity and access management (IAM), big data analytics, technology orchestration, and collaboration tools. AI-based quantum computing stands to bolster encryption as old methods are replaced. The government actions to incubate ethics in AI are a good start, and the NIST AI Risk Management Framework (AI RMF) 1.0 is long overdue; it will likely be refined based on private-sector feedback. However, adding the DOD's five principles for the ethical development of AI to the NIST AI RMF could yield better synergies. This approach should be used by the private sector and academia in customized ways. AI product ethical deviations should be treated as quality control and compliance issues and remediated immediately.
Organizations should consider forming an AI governance committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. ChatGPT is a good encyclopedia and a capable Boolean search tool, yet it got some things wrong about quantum computing in this article, which we cited and corrected. The Simplified AI text-to-graphics generator was useful but needed some manual edits as well. Both of these generative AI tools will likely get better with time.
Artificial intelligence (AI) will spur many mobile malware and ransomware variants faster than Apple and Google can block them. Compounding this, people often have no mobile antivirus on their smartphones even when they have it on their personal and work computers, and a happy-go-lucky culture of application downloading makes it all the worse. As a result, more breaches should be expected via smartphones, watches, and eyeglasses from AI-enabled threats.
Therefore, education and awareness around the review and removal of non-essential mobile applications is a top priority, especially for mobile devices used separately or jointly for work purposes. Containerization is required via a mobile device management (MDM) tool such as JAMF, Hexnode, VMware, or Citrix Endpoint Management. A bring your own device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. Mapping the mobile ecosystem components in detail, including the AI touch points, is a must.
The growth and acceptability of mass work from home (WFH), combined with the mass resignation / gig economy, remind employers that great pay and culture alone are not enough to keep top talent. At this point AI only takes away some simple jobs while creating AI support jobs, though the percentages are not yet clear. Signing bonuses and personalized treatment are likely needed for top talent. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will likely expand to personal devices (BYOD) and smartphones, watches, and eyeglasses. Geolocation-based authentication is here to stay, with double biometrics likely: fingerprint, eye scan, typing patterns, and facial recognition. The security perimeter remains more defined by data analytics than physical / digital boundaries, and we should dashboard this with machine learning tools as the use cases evolve.
Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity / fog. Organizations should preconfigure artificial intelligence (AI) based cloud-scale options and spend more on cloud-trained staff. They should also make sure they select at least two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and plug-in applications. It also mitigates risk and makes vendors bid more competitively. There is huge potential for AI synergies with cloud security posture management (CSPM) tools and threat response tools; experimentation will likely yield future dividends. Organizations should not be passive and stuck in old paradigms. The older generations should seek to learn from the younger generations without bias. Also, comprehensive logging is a must for AI tools.
Regarding cryptocurrency, non-fungible tokens (NFTs), initial coin offerings (ICOs), and related exchanges, artificial intelligence (AI) will be used by crypto scammers and those seeking to launder money. Watch out for scammers who make big claims with no details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers and advisors want to share that information and will back it up with details in many documents and filings [40]. Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erring far on the side of compliance. This requires us to pay more attention to knowing and monitoring our own social media baselines; emerging AI data analytics can help here. If you use crypto mixer and / or splitter services, you run the risk of having your digital assets mixed with dirty digital assets; you face high fees, zero customer service, no regulatory protection, no decent terms of service or privacy policy (if any), and no guarantee that the service will even work the way you think it will.
As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about this, because if we are, then our organizations will stay weak and outdated, and we will be plied by the same artificial intelligence (AI) generated political bias that we fear confronting. More social media training is needed, as many security professionals still think it is mostly an external marketing thing.
It is best to assume AI tools are reading all social media posts and all other available articles, including this article, which we entered into ChatGPT for feedback; it was slightly helpful in pointing out other considerations. Public-to-private partnerships (InfraGard) need to improve, and application-to-application permissions need to be more scrutinized. Not everyone needs to be a journalist, but everyone can have the common sense to identify AI / malware-inspired fake news. We must report undue AI bias in big tech from an IT, compliance, media, and security perspective. We must also resist the temptation to jump on the AI hype bandwagon; rather, we should evaluate each tool and use case based on real-world business outcomes for the foreseeable future.
About the Authors:
Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.
Matthew Versaggi is a senior leader in artificial intelligence with large company healthcare experience who has seen hundreds of use cases. He is a distinguished engineer, built an organization’s “College of Artificial Intelligence”, introduced and matured both cognitive AI technology and quantum computing, has been awarded multiple patents, is an experienced public speaker, entrepreneur, strategist, and mentor, and has international business experience. He has an MBA in international business and economics and an MS in artificial intelligence from DePaul University, plus a BS in finance and MIS and a BA in computer science from Alfred University. Lastly, he has nearly a dozen professional certificates split between AI, technology, and business strategy.
[37] Muneer, Salman, Muhammad Bux Alvi, and Amina Farrakh. “Cyber Security Event Detection Using Machine Learning Technique.” International Journal of Computational and Innovative Sciences, Vol. 2, No. 2, pp. 42–46, 2023: https://ijcis.com/index.php/IJCIS/article/view/65.
[38] Azhar, Ishaq. “Identity Management Capability Powered by Artificial Intelligence to Transform the Way User Access Privileges Are Managed, Monitored and Controlled.” International Journal of Creative Research Thoughts (IJCRT), ISSN 2320-2882, Vol. 9, Issue 1, pp. 4719–4723, January 2021: https://ssrn.com/abstract=3905119.
Backing up data is one of the best things you can do to improve your response to ransomware, a data breach, an infrastructure failure, or another type of cyber-attack. Without a comprehensive backup method that works and is tested, you likely will not be able to recover from where you left off, thereby harming your business and customers.
The 3-2-1 backup method requires saving multiple copies of data on different device types and in different locations. More specifically, the 3-2-1 method follows these three requirements:
3 Copies of Data: Have three copies of data—the original, and at least two copies.
2 Different Media Types: Use two different media types for storage. This helps reduce the impact of a failure that affects one specific storage media type more than the other.
1 Copy Offsite: Keep one copy offsite to prevent the possibility of data loss due to a site-specific failure.
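The three requirements above can be expressed as a simple check against a backup inventory. This is a minimal sketch with a hypothetical inventory format; it only illustrates the rule, not any backup product's API.

```python
# Sketch of a 3-2-1 rule check over a hypothetical backup inventory.
# Each entry records the storage media type and whether the copy is offsite.

def satisfies_3_2_1(copies: list[dict]) -> bool:
    """True when the inventory meets all three 3-2-1 requirements."""
    enough_copies = len(copies) >= 3                      # 3 copies of data
    two_media = len({c["media"] for c in copies}) >= 2    # 2 media types
    one_offsite = any(c["offsite"] for c in copies)       # 1 copy offsite
    return enough_copies and two_media and one_offsite

inventory = [
    {"media": "disk", "offsite": False},   # original
    {"media": "tape", "offsite": False},   # local copy on different media
    {"media": "cloud", "offsite": True},   # offsite copy
]
```

Dropping the cloud copy, for example, fails both the three-copy and the offsite requirement, which is exactly the site-specific risk the rule guards against.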
Here are some pointers to make your backup more effective:
Select the right data to back up: Critical data includes word processing documents, electronic spreadsheets, databases, financial files, human resources files, and accounts receivable / payable files. Not everything is worth backing up, as that wastes space. For example, data that is eight years old with no business use is not worth backing up.
Back up on a schedule: Back up data automatically on a repeatable schedule, if possible bi-weekly, weekly, or even daily if needed. Pick a day or time range when the backup will run, say Thursdays at 10:00 p.m. CST (when most users are not working).
Have backup test plans and follow them: Your backup plan must be written down in a clear and detailed way, describing the backup process, roles, interconnections, and milestones that can gauge whether it is working, as well as the expected time to recovery. Then, of course, test the backup at least every six months or after a key infrastructure change.
Automate backups: Use software automation to execute the backups to save user time, and to reduce the risk of human error.
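The scheduling and automation pointers above can be sketched as a small script run by a scheduler such as cron. Paths and names here are illustrative assumptions; the timestamped archive naming simply makes each automated run identifiable and sortable.

```python
# Minimal sketch of an automated backup job: archive a folder under a
# timestamped name, intended to run on a schedule (e.g., Thursdays 10 p.m.).
# Paths and the naming convention are illustrative assumptions.
import shutil
from datetime import datetime
from pathlib import Path

def backup_name(prefix: str, when: datetime) -> str:
    """Deterministic, sortable archive name for a given run time."""
    return f"{prefix}-{when:%Y%m%d-%H%M}"

def run_backup(source: str, dest_dir: str) -> str:
    """Zip the source folder into dest_dir and return the archive path."""
    name = backup_name(Path(source).name, datetime.now())
    return shutil.make_archive(str(Path(dest_dir) / name), "zip", source)

# Example cron entry for Thursdays at 22:00:
#   0 22 * * 4 /usr/bin/python3 /opt/backup.py
```

Letting the scheduler invoke the script removes the human from the loop, which is the whole point of the automation pointer: no missed runs and no manual naming mistakes.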
About the Author:
Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. Over 17 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments, including being a member of the Federal Reserve Secure Payment Task Force. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. As a futurist, his writings on digital currency, the Target data breach, and Google combining Google + video chat with Google Hangouts video chat have been validated by many. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire.
Every year I like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since the pandemic is partly the catalyst for most of these trends, in conjunction with it being a presidential election year like no other. All these trends are likely to significantly impact small businesses, government, education, high tech, and large enterprises in big and small ways.
Fig 1. Stock Mashup, 2020.
1) Disinformation Efforts Accelerate Challenging Data and Culture:
Advancements in communications technologies, the growth of large social media networks, and the “appification” of everything increase the ease and capability of disinformation. Disinformation is defined as incorrect information intended to mislead or disrupt, especially propaganda issued by a government organization to a rival power or the media. For example, governments create digital hate mobs to smear key activists or journalists, suppress dissent, undermine political opponents, spread lies, and control public opinion (Shelly Banjo, Bloomberg, 05/18/2019). Today’s disinformation war is largely digital via platforms like Facebook, Twitter, iTunes, WhatsApp, Yelp, and Instagram. Yet even state-sponsored and private news organizations are increasingly the weapon of choice, creating a false sense of validity. Undeniably, the battlefield is wherever many followers reside.
Bots and botnets are often behind the spread of disinformation, complicating efforts to trace it and to stop it. Further complicating this phenomenon is the number of app-to-app permissions. For example, the CNN and Twitter apps having permission to post to Facebook and then Facebook having permission to post to WordPress and then WordPress posting on Reddit, or any combination like this. Not only does this make it hard to identify the chain of custody and source, but it also weakens privacy and security due to the many authentication permissions.
We all know that false news spreads faster than real news most of the time, largely because it is sensationalized. Since disinformation draws in viewers, which drives clicks and ad revenues – it is a money-making machine. If you can control what’s trending in the news and/or social media, it impacts how many people will believe it. This in turn impacts how many people will act on that belief, good or bad. This is exacerbated when combined with human bias or irrational emotion. For example, in late 2020 there were many cases of fake COVID-19 vaccines being offered in response to human fear (FDA, 12/22/2020). This negatively impacts culture by setting a misguided example of what is acceptable.
There were several widely reported cases of political disinformation in 2020, including misleading texts, e-mails, mailers, and robocalls designed to confuse American voters amid the already stressful pandemic. Like a narcissist’s triangulation trap, these disinformation bursts riled political opponents on both sides in all states, creating miscommunication, ad hominem attacks, and even derailed careers (PBS, The Hinkley Report, 11/24/20). Moreover, huge swaths of confused voters aligned more with speculation and emotion/hype than unbiased facts. This dirtied the data in terms of the election process and raises the question of which parts of the election information process are broken. This normalizes petty policy fights, emotional reasoning, and a lack of unbiased intellectualism, negatively impacting western culture. All to the threat actor’s delight. Increased public-to-private partnerships, more educational rigor, and enhanced privacy protections for election and voter data are needed to combat this disinformation.
2) Stalkerware Grows and Evolves Reducing Mobile Privacy:
The increased use of mobile devices, in conjunction with pandemic-induced work from home (WFH) growth, has produced more stalkerware. According to one report, there was a 51% increase in Android spyware and stalkerware from March through June versus the first two months of the year (Avast, Security Boulevard, 12/02/20), and this is likely to exceed a 100% increase when all data is tabulated for the end of 2020. Inspired by covert law enforcement investigation tactics, this malware variant can be secretly installed on a victim’s phone, hiding as a seemingly harmless app. It can easily be confused with employee monitoring software; however, stalkerware is typically installed by fake friends, jealous spouses and partners, ex-partners, and even concerned relatives. If successfully installed, it relays private information back to the attacker, including the victim’s photos, location, texts, web browsing history, call records, and more. This is where the privacy violation and abuse and/or fraud can start, yet it is hard to identify in the blur of too many mobile apps.
3) Identity & Access Management (IAM) Scrutiny Drives Zero Trust:
The pandemic has pushed most organizations to a mass WFH posture. Generally, this improves productivity, making it likely to become the new norm, albeit with new rules and controls. To support this, 51% of business leaders are speeding up the deployment of Zero Trust capabilities (Andrew Conway, Microsoft, 08/19/20). Zero trust moves organizations to a need-to-know-only access mindset with inherent deny rules, all the while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), improved need-to-know policies, group membership reviews, and state-of-the-art privileged access management (PAM) tools for the next year.
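The deny-by-default mindset at the heart of zero trust can be sketched as a tiny role-based access check: nothing is permitted unless an explicit rule grants it. The roles, resources, and actions below are illustrative assumptions, not any IAM product's policy language.

```python
# Sketch of zero trust's deny-by-default RBAC check: access is granted
# only when an explicit rule allows it. Role/resource names are hypothetical.

ALLOW_RULES = {
    ("hr-analyst", "payroll-db"): "read",
    ("dba", "payroll-db"): "admin",
}

def check_access(role: str, resource: str, action: str) -> bool:
    """Deny unless an explicit rule grants the action (admin implies all)."""
    granted = ALLOW_RULES.get((role, resource))
    return granted == action or granted == "admin"
```

Note the default path: an unknown role or an unlisted resource falls through to a denial with no special-case code, which is exactly the "inherent deny rules" posture described above.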
4) Security Perimeter is Now More Defined by Data Analytics than Physical/Digital Boundaries:
This increased WFH posture blurs the security perimeter both physically and digitally. New IP addresses, internet volume, routing, geolocation, and virtual machines (VMs) exacerbate this blur. This raises the criticality of good data analytics and dashboarding to define the digital boundaries in real time. Therefore, prior audits, security controls, and policies may be ineffective. For instance, empty corporate offices are the physical byproduct of mass WFH, requiring organizations to set default disable for badge access. Extra security in or near server rooms is also required. The pandemic has also made vendor interactions more digital, so digital vendor connection points should be reduced and monitored in real time, and the related exception policies should be reevaluated.
5) Data Governance Gets Sloppy Amid Agility:
Mass WFH has increased agility but driven sloppy data governance. For example, one week after the CARES Act was passed, banks were asked to accept Paycheck Protection Program (PPP) loan applications. Many banks were unprepared to deal with the flood of data from digital applications, financial histories, and related documents, and were not able to process them efficiently. Moreover, the easing of regulatory red tape at hospitals and clinics, although well-intentioned to make emergency response faster, created sloppy data governance as well. The irony is that regulators are unlikely to give either of these industries a break, nor will civil attorneys hungry for any hangnail claim.
6) The Divide Between Good and Bad Cloud Security Grows:
The pandemic has reminded us that there are two camps in cloud security: those who have a planned option for bigger cloud scale, and those who are burning their feet in a hasty rush to get there. In the first case, the infrastructure is preconfigured and hardened, rates are locked, and there is less complexity, all of which improves compliance and gives tech risk leaders more peace of mind. In the latter, the infrastructure is less clear, rates are not predetermined, compliance and integration are confusing at best, and costs run high, all of which could set such poorly configured cloud infrastructures up for future disasters.
7) Phishing Attacks Grow Exponentially and Get Craftier:
The pandemic has caused a hurricane of phishing emails that have been hard to keep up with. According to KnowBe4 and Security Magazine, there has been a 6,000% increase in phishing e-mails since the start of the pandemic (Stu Sjouwerman, KnowBe4, 07/13/20 & Security Magazine, 07/22/20). Many of these e-mails have improved their approach and design, appearing more professional and appealing to our emotions by using tags concerning COVID relief, data, and vaccines. Ransomware increased 72% year over year (Security Magazine, 07/22/20). With many new complexities in the mobile ecosystem and exponential app growth, it is not surprising that mobile vulnerabilities also increased by 50% (Security Magazine, 07/22/20).
Take-Aways:
COVID-19 is the catalyst for digital transformation in tech automation, IAM, big data, collaboration tools, and AI. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will expand to personal devices and smartphones/watches. Geolocation-based authentication is here to stay, with double biometrics likely. The security perimeter is now more defined by data analytics than physical/digital boundaries, and we should dashboard this with machine learning and AI tools.
Education and awareness around the review and removal of non-essential mobile apps is a top priority. Especially for mobile devices used separately or jointly for work purposes. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. A bring your own device (BYOD) policy needs to be written, followed and updated often – embracing need to know and role-based access (RBAC) principles. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web/IT risk. Mapping the mobile ecosystem components in detail is a must.
Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity/fog. Organizations should preconfigure cloud-scale options and spend more on cloud-trained staff. They should also make sure they select at least two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and add-ons. It also mitigates risk and makes vendors bid more competitively. IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about it, because if we are, then our organizations will stay weak and insecure, and we will be plied by the same political bias that we fear confronting. As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. More social media training is needed, as many security professionals still think it is mostly an external marketing thing. Public-to-private partnerships need to improve, and app-to-app permissions need to be scrutinized. Enhanced privacy protections for election and voter data are needed. Not everyone needs to be a journalist, but everyone can have the common sense to identify malware-inspired fake news. We must report undue bias in big tech from an IT, compliance, media, and security perspective.
About the Author:
Jeremy Swenson is a disruptive-thinking security entrepreneur and senior management tech risk consultant. Over 15 years he has held progressive roles at many banks, insurance companies, retailers, healthcare orgs, and even governments. Organizations relish his ability to bridge gaps and flesh out hidden risk management solutions while at the same time improving processes. He is also a frequent speaker, published writer, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN and an MSST (Master of Science in Security Technologies) degree from the University of Minnesota.
Featuring the esteemed technology and risk thought leaders Donald Malloy and Nathaniel Engelsen — this episode covers threat modeling methodologies STRIDE, Attack Tree, VAST, and PASTA. Specifically, how to apply them with limited budgets. It also discusses the complex intersection of how to derive ROI on threat modeling with compliance and insurance considerations. We then cover IAM best practices including group and role level policy and control best practices. Lastly, we hear a few great examples of key CISO risk management must-dos at the big and small company levels.
Fig. 2. Pasta Threat Modeling Steps (Nataliya Shevchenko, CMU, 12/03/2018).
Donald Malloy has more than 25 years of experience in the security and payment industry and is currently a security technology consultant advising many companies. Malloy was responsible for developing the online authentication product line while at NagraID Security (Oberthur), and prior to that he was Business Development and Marketing Manager for Secure Smart Card ICs for both Philips Semiconductors (NXP) and Infineon Technologies. Malloy originally comes from Boston, where he was educated; he holds an M.S. in Organic Chemistry and an M.B.A. in Marketing. Presently he is the Chairman of The Initiative for Open Authentication (OATH) and is a solution provider with DualAuth. OATH is an industry alliance that has changed the authentication market from proprietary systems to an open-source, standards-based architecture promoting the ubiquitous strong authentication used by most companies today. DualAuth is a global leader in trusted security with two-factor authentication, including auto passwords. He resides in southern California, and in his spare time he enjoys hiking, kayaking, and traveling around this beautiful world.
Nathaniel Engelsen is a technology executive, agilist, writer, and speaker on topics including DevOps, agile team transformation, and cloud infrastructure & security. Over the past 20 years he has worked for startups, small and mid-size organizations, and $1B+ enterprises in industries as varied as consulting, gaming, healthcare, retail, transportation logistics, and digital marketing. Nathaniel’s current security venture is Callback Security, providing dynamic access control mechanisms that allow companies to turn off well-known or static remote and database access routes. Nathaniel has a bachelor’s in Management Information Systems from Rowan University and an MBA from the University of Minnesota, where he was a Carlson Scholar. He also holds a CISSP.
Each year we like to review and comment on the most impactful technology and business concepts that are likely to significantly impact the coming year. Although this list is incomplete, these are three items worth dissecting.
3. The Hyper Expansion of Cloud Services Will Spur Competition and Innovation:
Cloud computing is a utility that relies on shared resources to achieve economies of scale, with high-powered services rapidly provisioned over the internet with minimal management effort (Fig. 1). It presently consists of three main areas: SaaS (software as a service), PaaS (platform as a service), and IaaS (infrastructure as a service). It is typically used for technology tool diversification, redundancy, disaster recovery, storage, cost reduction, high-powered computing tests and models, and even as a globalization strategy. Cloud computing generated about $127 billion in 2017 and is projected to hit $500 billion by 2020. At this rate, we can expect many more product startups and consulting services firms to grow and consolidate in 2018 as they are forced to be more competitive, thus bringing costs down.
The line between local and cloud computing is blurry because the cloud is part of almost all computer functions. Consumer-facing examples include Microsoft OneDrive, Google Drive, Gmail, and the iPhone infrastructure. Apple’s cloud services are primarily used for online storage, backups, and synchronization of your mail, calendar, and contacts; all the data is available on iOS, macOS, and even on Windows devices via the iCloud control panel.
Fig. 1. Linked Use Cases for Cloud Computing.
More business-side examples include Salesforce, SAP, IBM CRM, Oracle, Workday, VMware, ServiceNow, and Amazon Web Services. Amazon Cloud Drive offers storage for music and images purchased through Amazon Prime, as well as corporate-level storage that extends services for anything digital. Amazon’s widespread adoption of hardware virtualization and service-oriented architecture with automated utilization will sustain the growth of cloud computing. With the cloud, companies of all sizes can get their applications up and running faster, with less IT management involved and at much lower cost. Thus, they can focus on their core business and market competition.
The big question for 2018 is what new services and twists cloud computing will offer the market and how it will change our lives. In tackling this question, we should try to imagine the unimaginable. Perhaps in 2018 the cloud will be the platform where combined supercomputers use quantum computing and machine learning to make key breakthroughs in aerospace engineering and medical science. Additionally, virtual reality as a service sounds like the next big thing; we will coin it VRaaS.
2. The Reversal of Net Neutrality is Awful for Privacy, Democracy, and Economics:
Before it was rolled back, net neutrality required service providers to treat all internet traffic equally. This principle is morally and logically sound because a free and open internet is as important as freedom of the press, freedom of speech, and the free market. The internet should serve startups, big companies, opposing media outlets, and legitimate governments in the same way and without favor; it is like air to all these sectors of the economy and to the world.
Rolling back net neutrality is something the U.S. will regret in the coming months. Although its full implications are not yet known, it may mean that fewer data centers will be built in the U.S. and that smaller companies will be priced out of business by engineered imbalances in the cost of internet bandwidth. Netflix and most tech companies dissented via social media, generating viral support (Fig. 2).
Fig. 2. Viral Netflix Opposition to Rolling Back Net Neutrality.
Lastly, the rollback widens the gap between rich and poor and gives the government a stronger hand in shaping the tenor of news media, social norms, and, worst of all, political bias. As fiber-optic internet connectivity expands and innovative companies like Google, Twitter, and Facebook evolve into hybrid news sources, a fully free internet is the best safeguard for exposing their own excesses and biases, and for keeping legitimate conflicting viewpoints easy to find.
1. Amazon’s Purchase of Whole Foods Tells Us the Gap Between Retailer and Tech Service Company is Closing:
For quite a long time I have been a fan of Amazon because they were anti-retail-establishment. In fact, in Amazon's early days it was the retail establishment that laughed at them, suggesting they would flounder and fail: "How dare you sell used books by mail out of a garage." Yet their business model has evolved into a technology and logistics platform more than a product-oriented one. Many large and small retailers – and companies of all types – employ Amazon's selling, shipping, and infrastructure platform to the degree that they are, in essence, married to Amazon. Business Insider said, "The most important deal of the year was Amazon's $13.7 billion acquisition of Whole Foods. In one swoop, Amazon totally disrupted groceries, retail delivery, and even the enterprise IT market" (Weinberger, 12/17/17). The basis for the acquisition is that grocery delivery is underserved and has huge potential in the U.S. as the population grows, fewer people own cars, and people increasingly value not wasting time walking around a retail store (becoming socialized to a new level of service) (Fig. 3).
Fig. 3. How Amazon Can Use Whole Foods to Serve High Potential Grocery Delivery.
Mr. Swenson and Mr. Mebrahtu met in graduate business school, where they collaborated on global business projects concerning leadership, team dynamics, and strategic innovation. They have had many consulting stints at leading technology companies and presently work together indirectly at Optum / UHG. Mr. Swenson is a Sr. consultant, writer, and speaker in business analysis, project management, cybersecurity, process improvement, leadership, and abstract thinking. Mr. Mebrahtu is a Sr. developer, database consultant, agile specialist, application design and test consultant, and Sr. quality manager of database development.