Secure Cloud Best Practices — A Collaborative Endeavor for Business Resilience

Fig. 1. Cloud Shared Security Responsibility Model, Microsoft, 2024.

#CloudSecurity #CyberSecurity #SharedResponsibility #IAM #DataEncryption #PolicyCompliance #EmployeeTraining #EndpointSecurity #RiskMitigation #DataProtection #BusinessResilience #InfoSec #SecurityAwareness #CloudMigration #CIOInsights

In today’s digitally interconnected world, the cloud has emerged as a cornerstone of modern business operations, offering scalability, flexibility, and efficiency like never before. Leading vendors like Amazon Web Services (AWS), Microsoft, Oracle, and Dell offer public, private, and hybrid cloud formats. However, as businesses increasingly migrate their operations to the cloud, ensuring robust security measures becomes paramount. Here, we delve into seven essential strategies for securing the cloud effectively, emphasizing collaboration between C-suite leaders and IT stakeholders.

1) Understanding the Cloud Shared Responsibility Model:

The first step in securing the cloud is grasping the nuances of the shared responsibility model (Fig. 1). While cloud providers manage the security of the underlying infrastructure platform, customers are responsible for securing their data and applications, including who gets access to them and at what level. This necessitates a clear delineation of responsibilities, ensuring no security gaps exist. CIOs and CISOs must thoroughly educate themselves and their teams on this model to make informed security decisions.

2) Asking Detailed Security Questions:

It is imperative to engage cloud providers in detailed discussions regarding security measures, digging far deeper than boilerplate questions and checkbox forms. C-suite executives should inquire about specific security protocols, compliance certifications, incident response procedures, and data protection mechanisms. Organizations can mitigate risks and build trust in their cloud ecosystem by seeking transparency and understanding the provider’s security posture.

3) Implementing IAM Solutions:

Identity and access management (IAM) lies at the core of cloud security. Robust IAM solutions enable organizations to authenticate, authorize, and manage user access effectively. CIOs and CISOs should invest in IAM platforms equipped with features like multi-factor authentication, role-based access control, least privilege, and privileged access management (PAM) governance. By enforcing the principle of least privilege, businesses can minimize the risk of unauthorized access and insider threats.
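To make the least-privilege idea concrete, here is a minimal Python sketch of a deny-by-default, role-based access check; the role names, resources, and actions are hypothetical, and a production IAM platform would add authentication, MFA, and audit logging on top of this.

```python
# Minimal illustration of deny-by-default, role-based access checks.
# Role names, resources, and actions are hypothetical examples.
ROLE_PERMISSIONS = {
    "finance_analyst": {("billing_reports", "read")},
    "cloud_admin":     {("billing_reports", "read"), ("vm_fleet", "restart")},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True only if the role explicitly grants (resource, action)."""
    return (resource, action) in ROLE_PERMISSIONS.get(role, set())

# Least privilege in practice: anything not explicitly granted is denied.
assert is_allowed("finance_analyst", "billing_reports", "read")
assert not is_allowed("finance_analyst", "vm_fleet", "restart")
```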

4) Establishing Modern Cloud Security Policies:

A proactive approach to security entails the formulation of comprehensive cloud security policies aligned with industry best practices and regulatory requirements. Business leaders must collaborate with security professionals to develop policies covering data classification, incident response, encryption standards, and employee responsibilities. Regularly reviewing and updating these policies is essential to adapting to evolving threats and technologies, and the requirements can be country specific.

5) Encrypting Data in Motion and at Rest:

Encryption serves as a critical safeguard for data confidentiality and integrity in the cloud. Organizations should employ robust encryption mechanisms to protect data both in transit and at rest. Utilizing encryption protocols such as TLS for network communications and AES for data storage adds an extra layer of defense against unauthorized access. Additionally, implementing reliable backup solutions ensures data resilience in the event of breaches or disasters. Backing up all key files via the 3-2-1 rule (three copies of files on two different media types, with one copy offsite) further reduces the damage a ransomware attack can do.
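As a concrete illustration of encryption at rest, the following sketch uses AES-256-GCM via the widely used Python cryptography package; it assumes the key would in practice come from a key management service (KMS) rather than being generated locally, and the sample record and identifier are made up.

```python
# At-rest encryption sketch using AES-256-GCM via the "cryptography" package.
# Key management (e.g., a cloud KMS or HSM) is out of scope here; the key is
# generated locally purely for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # store/retrieve via a KMS in practice
aesgcm = AESGCM(key)

plaintext = b"customer record: ..."
nonce = os.urandom(12)                      # unique per encryption, never reused
ciphertext = aesgcm.encrypt(nonce, plaintext, b"record-id-42")  # with associated data

recovered = aesgcm.decrypt(nonce, ciphertext, b"record-id-42")
assert recovered == plaintext
```

Note that GCM nonces must never be reused with the same key; storing the nonce alongside the ciphertext is the usual pattern.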

6) Educating Staff Regularly:

Human error remains one of the most significant vulnerabilities in cloud security. Therefore, ongoing employee education and awareness initiatives are indispensable. C-suite leaders must prioritize security training programs to cultivate a security-conscious culture across the organization. By educating staff on security best practices, threat awareness, and incident response protocols, businesses can fortify their defense against social engineering attacks and insider threats. Importantly, this education is far more effective when interactive and gamified to ensure participation and sustained learning outcomes.

7) Mapping and Securing Endpoints:

Endpoints serve as crucial entry points for cyber threats targeting cloud environments. CIOs and CISOs should conduct thorough assessments to identify and secure all endpoints accessing the cloud infrastructure. Visually mapping endpoints is the first step in confirming how many there are, what types they are, and where they actually reside at present, since this inventory can and does change. Implementing endpoint protection solutions, enforcing device management policies, and promptly deploying security patches are essential to mitigate endpoint vulnerabilities. Furthermore, embracing technologies like zero-trust architecture enhances endpoint security by continuously verifying user identities and device integrity.

In conclusion, securing the cloud demands a multifaceted approach encompassing collaboration, diligence, vendor communication and partnership, and innovation. By embracing the principles outlined above, organizations can strengthen their cloud security posture, mitigate risks, and foster a resilient business environment. C-suite leaders, in conjunction with IT professionals, must champion these strategies to navigate the evolving threat landscape and safeguard the future of their enterprises.

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

AT&T Faces Massive Data Breach Impacting 73 Million and Negligence Lawsuits

Fig 1. AT&T Data Breach Infographic, WLBT3, 2024.

After weeks of denials, AT&T Inc. (NYSE:T), a leading player in the telecommunications sector, has confirmed a substantial data breach originating from 2021 that compromised sensitive information belonging to 73 million current and former account holders [1]. The data has since surfaced on the dark web, exposing names, addresses, email addresses, phone numbers, and, for numerous individuals, highly sensitive data such as Social Security numbers, dates of birth, and AT&T passcodes.

How can you determine if you were impacted by the AT&T data breach? First, ask yourself whether you were ever a customer, and do not rely solely on AT&T to notify you. By utilizing services like Have I Been Pwned, you can ascertain whether your data has been compromised. Additionally, Google’s Password Checkup tool can notify you if your account details are exposed, especially if you store password information in a Google account. For enhanced security, the premium edition of Bitwarden, a top-rated password manager, can scan the internet for compromised passwords.
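For those who want to automate the check, a minimal sketch against the public Pwned Passwords range API (the password-checking service behind Have I Been Pwned) looks roughly like this; it checks a password rather than an email address, and only the first five characters of the SHA-1 hash ever leave your machine.

```python
# Check a password against the public Pwned Passwords range API (k-anonymity):
# only the first five characters of the SHA-1 hash are sent over the network.
import hashlib
import requests

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)       # number of times this password appears in breaches
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))   # a large number; never use this password
```

Checking whether a specific email address appears in breach data uses a separate, authenticated Have I Been Pwned API and is easiest done through the website itself.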

One prevalent issue concerning data breaches is the tendency for individuals to overlook safeguarding their data until it’s too late. It’s a common scenario – we often don’t anticipate our personal information falling into the hands of hackers who then sell it to malicious entities online. Regrettably, given the frequency and magnitude of cyber-attacks, the likelihood of your data being exposed has shifted from an “if” to a “when” scenario.

Given this reality, it’s imperative to adopt measures to safeguard your identity and data online, including [2]:

  1. Implementing multi-factor authentication – a crucial step in thwarting hackers’ attempts to infiltrate your accounts, even if your email address is publicly available (see the one-time-password sketch after this list).
  2. Avoiding password reuse and promptly changing passwords if they are compromised in a data breach – this practice ensures that even if your login credentials are exposed, hackers cannot infiltrate other accounts you utilize, including the one that has experienced a breach.
  3. Investing in identity protection services, either as standalone solutions or as part of comprehensive internet security suites – identity protection software can actively monitor the web for data breaches involving you, enabling you to take proactive measures to safeguard your identity.
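As a small illustration of the first item, the time-based one-time passwords used by most authenticator apps can be sketched with the pyotp package; the secret here is generated on the fly purely for demonstration, whereas a real service would provision and store it per user.

```python
# Minimal TOTP (time-based one-time password) sketch using the pyotp package,
# the same mechanism behind most authenticator apps.
import pyotp

secret = pyotp.random_base32()          # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

code = totp.now()                       # 6-digit code the user's app would display
print("Current code:", code)
print("Verifies:", totp.verify(code))   # True within the 30-second window
```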

AT&T defines a customer’s passcode as a numeric Personal Identification Number (PIN), typically consisting of four digits. Distinguishing it from a password, a passcode is necessary for finalizing an AT&T installation, conducting personal account activities over the phone, or reaching out to technical support, according to AT&T.

How to reset your AT&T passcode:

AT&T has taken steps to reset passcodes for active accounts affected by the data breach. However, as a precautionary measure, AT&T advises users who haven’t altered their passcodes within the last year to do so. Below are the steps to change your AT&T passcode:

  1. Navigate to your myAT&T Profile.
  2. Sign in when prompted. (If additional security measures are in place and sign-in isn’t possible, AT&T suggests opting for “Get a new passcode.”)
  3. Locate “My linked accounts” and select “Edit” for the passcode you wish to update.
  4. Follow the provided prompts to complete the process.

Here is AT&T’s official statement on the matter from 03/30/24 [3]:

“Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders. Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set. The company is communicating proactively with those impacted and will be offering credit monitoring at our expense where applicable. We encourage current and former customers with questions to visit http://www.att.com/accountsafety for more information.”

The hackers behind this, allegedly the group ShinyHunters, endeavored to profit from the pilfered data by listing it for sale on the RaidForums data theft forum, initiating the bidding at $200,000 and entertaining additional offers in increments of $30,000 [4]. Moreover, they demonstrated readiness to promptly sell the data for $1 million, highlighting the gravity and boldness of the cyber offense.

Not surprisingly, AT&T is now facing numerous class-action lawsuits following the company’s acknowledgment of this data breach, which compromised the sensitive information of 73 million existing and former customers [5]. Among the ten lawsuits filed, one is being handled by Morgan & Morgan, representing plaintiff Patricia Dean and individuals in similar circumstances.

The lawsuit levels allegations of negligence, breach of implied contract, and unjust enrichment against AT&T, contending that the company’s deficient security measures and failure to promptly provide adequate notification about the data breach exposed customers to significant risks, including identity theft and various forms of fraud. It seeks compensatory damages, restitution, injunctive relief, enhancements to AT&T’s data security protocols, future audits, credit monitoring services funded by the company, and a trial by jury [6].


About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

References:


[1] AT&T. “AT&T Addresses Recent Data Set Released on the Dark Web.” 03/30/24: https://about.att.com/story/2024/addressing-data-set-released-on-dark-web.html

[2] Colby, Clifford, Combs, Mary-Elisabeth; “Data From 73 Million AT&T Accounts Stolen: How You Can Protect Yourself.” CNET. 04/02/24: https://www.cnet.com/tech/mobile/data-from-73-million-at-t-accounts-stolen-how-you-can-protect-yourself/

[3] AT&T. “AT&T Addresses Recent Data Set Released on the Dark Web.” 03/30/24: https://about.att.com/story/2024/addressing-data-set-released-on-dark-web.html

[4] Naysmith, Caleb; “73 Million AT&T Users’ Data Leaked As Hacker Said, ‘I Don’t Care If They Don’t Admit. I’m Just Selling’ Auctioned At Starting Price Of $200K.” Yahoo Finance: https://finance.yahoo.com/news/73-million-t-users-data-173015617.html

[5] Kan, Michael. “AT&T Faces Class-Action Lawsuit Over Leak of Data on 73M Customers.” PC Mag. 04/02/24: https://www.pcmag.com/news/att-faces-class-action-lawsuit-over-leak-of-data-on-73m-customers

[6] Kan, Michael. “AT&T Faces Class-Action Lawsuit Over Leak of Data on 73M Customers.” PC Mag. 04/02/24: https://www.pcmag.com/news/att-faces-class-action-lawsuit-over-leak-of-data-on-73m-customers

NIST Cybersecurity Framework (CSF) New Version 2.0 Summary

Fig. 1. NIST CSF 2.0 Stepper, NIST, 2024.

#cyberresilience #cybersecurity #generativeai #cyberthreats #enterprisearchitecture #CIO #CTO #riskmanagement #bias #governance #RBAC #CybersecurityFramework #Cybersecurity #NISTCSF #RiskManagement #DigitalResilience #nist #nistframework #cyberawareness

The National Institute of Standards and Technology (NIST) has updated its widely used Cybersecurity Framework (CSF), a free and widely respected landmark guidance document for reducing cybersecurity risk. However, it’s important to note that most of the framework core has remained the same. Here are the core functions the security community already knows (a simple self-assessment sketch follows the list):

Govern (GV): Sets forth the strategic path and guidelines for managing cybersecurity risks, ensuring harmony with business goals and adherence to legal requirements and standards. Govern is the newest addition; it was implied in earlier versions but is now explicitly illustrated as touching every other aspect of the framework. It seeks to establish and monitor your company’s cybersecurity risk management strategy, expectations, and policy.

1. Identify (ID): Entails cultivating a comprehensive organizational understanding of managing cybersecurity risks to systems, assets, data, and capabilities.

2. Protect (PR): Concentrates on deploying suitable measures to guarantee the provision of vital services.

3. Detect (DE): Specifies the actions for recognizing the onset of a cybersecurity incident.

4. Respond (RS): Outlines the actions to take in the event of a cybersecurity incident.

5. Recover (RC): Focuses on restoring capabilities or services that were impaired due to a cybersecurity incident.
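One simple way to operationalize the six functions is a lightweight self-assessment tracker; the sketch below uses the CSF function names, but the 0-4 maturity scale, target level, and example scores are illustrative assumptions, not part of the framework.

```python
# Hypothetical CSF 2.0 self-assessment tracker; the six functions come from the
# framework, but the 0-4 maturity scale and example scores are illustrative only.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

current_maturity = {          # 0 = not started ... 4 = optimized (assumed scale)
    "Govern": 1, "Identify": 2, "Protect": 2,
    "Detect": 1, "Respond": 1, "Recover": 0,
}

target = 3
gaps = {fn: target - current_maturity[fn]
        for fn in CSF_FUNCTIONS if current_maturity[fn] < target}

# Report the largest gaps first so remediation planning starts there.
for fn, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{fn}: {gap} level(s) below target")
```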

The new 2.0 edition is structured for all audiences, industry sectors, and organization types, from the smallest startups and nonprofits to the largest corporations and government departments — regardless of their level of cybersecurity preparedness and complexity.

Fig. 2. NIST CSF 2.0 Function Breakdown, NIST, 2024.

Here are some key updates:

Emphasis is placed on the framework’s expanded scope, extending beyond critical infrastructure to encompass all organizations. Importantly, it better incorporates and expands upon supply chain risk management processes. It also introduces a new focus on governance, highlighting cybersecurity as a critical enterprise risk with many dependencies. This is critically important with the emergence of artificial intelligence.

To make implementation easier, NIST has developed quick-start guides customized for various audiences, case studies showcasing successful implementations, and a searchable catalog of references, all aimed at facilitating the adoption of CSF 2.0 by a wide variety of organizations.

The CSF 2.0 is aligned with the National Cybersecurity Strategy and includes a suite of resources to adapt to evolving cybersecurity needs, emphasizing a comprehensive approach to managing cybersecurity risk. New adopters can benefit from implementation examples and quick-start guides tailored to specific user types, facilitating easier integration into their cybersecurity practices. The CSF 2.0 Reference Tool simplifies implementation, enabling users to access, search, and export core guidance data in user-friendly and machine-readable formats. A searchable catalog of references allows organizations to cross-reference their actions with the CSF, linking to over 50 other cybersecurity documents – facilitating comprehensive risk management. The Cybersecurity and Privacy Reference Tool (CPRT) contextualizes NIST resources with other popular references, facilitating communication across all levels of an organization.

NIST aims to continually enhance CSF resources based on community feedback, encouraging users to share their experiences to improve collective understanding and management of cybersecurity risk. The CSF’s international adoption is significant, with translations of previous versions into 13 languages. NIST expects CSF 2.0 to follow suit, further expanding its global reach. NIST’s collaboration with ISO/IEC aligns cybersecurity frameworks internationally, enabling organizations to utilize CSF functions in conjunction with ISO/IEC resources for comprehensive cybersecurity management.

Resources:

  1. NIST CSF 2.0 Fact Sheet.
  2. NIST CSF 2.0 PDF.
  3. NIST CSF 2.0 Reference Tool.
  4. NIST CSF 2.0 YouTube Breakdown.

About the Author:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist/researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

Key Artificial Intelligence (AI) Cyber-Tech Trends and What They Mean for the Future

Minneapolis –

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #infosec #musktwitter #disinformation #cio #ciso #cto #chatgpt #openai #airisk #iam #rbac #artificialintelligence #samaltman #aiethics #nistai #futurereadybusiness #futureofai

By Jeremy Swenson & Matthew Versaggi

Fig. 1. Quantum ChatGPT Growth Plus NIST AI Risk Management Framework Mashup [1], [2], [3].

Summary:

This year is unique: policy makers and business leaders grew concerned with artificial intelligence (AI) ethics, disinformation morphed, and AI saw hyper growth, including connections to increased crypto money laundering via splitting / mixing. Impressively, AI cyber tools became more capable in the areas of zero-trust orchestration, cloud security posture management (CSPM), and threat response via improved machine learning; quantum-safe cryptography ripened; and authentication made real-time monitoring advancements, even as some hype remains. Moreover, the mass resignation / gig economy (remote work) remained a large part of the catalyst for all of these trends.

Introduction:

Every year we like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since policy makers and business leaders grew concerned with artificial intelligence (AI) ethics [4], disinformation morphed, AI had hyper growth [5], crypto money laundering via splitting / mixing grew [6], AI cyber tools became more capable – while the mass resignation / gig economy remained a large part of the catalyst for all of these trends. By August 2023 ChatGPT reached 1.43 billion website visits per month and about 180.5 million registered users [7]. This even attracted many non-technical naysayers. Impressively, the platform was only nine months old then and just turned a year old in November [8]. These numbers for AI tools like ChatGPT are going to continue to grow in many sectors at exponential rates. As a result, the below trends and considerations are likely to significantly impact government, education, high-tech, startups, and large enterprises in big and small ways, albeit with some surprises.

1. The Complex Ethics of Artificial Intelligence (AI) Swarms Policy Makers and Industry Resulting in New Frameworks:

The ethical use of artificial intelligence (AI) as a conceptual and increasingly practical dilemma has gained a lot of media attention and research in the last few years from those in philosophy (ethics, privacy), politics (public policy), academia (concepts and principles), and economics (trade policy and patents), all of whom have weighed in heavily. As a result, we find this space is beginning to mature. Governments (the USA, the EU, and others globally) have developed and socialized ethical policies and frameworks [9], [10], while major corporations motivated by profit are devising their own ethical vehicles and structures, often taking a legalistic view first [11]. Moreover, the World Economic Forum (WEF) has weighed in on this matter in collaboration with PricewaterhouseCoopers (PWC) [12]. All of this contributes to the accelerated pace of maturity of this area in general. The result is the establishment of shared conceptual viewpoints, early-stage security frameworks, accepted policies, guidelines, and governance structures to support the evolution of artificial intelligence (AI) in ethical ways.

For example, the Department of Defense (DOD) has formally adopted five principles for the ethical development of artificial intelligence capabilities as follows [13]:

  1. Responsible
  2. Equitable
  3. Traceable
  4. Reliable
  5. Governable

Traceable and governable seem to be the clearest and most important principles, while equitable and responsible seem gray at best and could be deemphasized in a heightened wartime context. The latter two echo the corporate social responsibility (CSR) efforts found more often in the private sector.

The WEF via PWC has issued its Nine AI Ethical Principles for organizations to follow [14], and the Office of the Director of National Intelligence (ODNI) has released its Framework for AI Ethics [15]. Importantly, the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework, as outlined in Figs. 2 and 3. NIST also released a playbook to support its implementation and has hosted several working sessions discussing it with industry, which we attended virtually [16]. It seems the mapping aspect could take you down many AI rabbit holes, some unforeseen, implying complex risk. Mapping also impacts how you measure and manage. None of this is fully clear, and much of it will change as ethical AI governance matures.

Fig. 2. NIST AI Risk Management Framework (AI RMF) 1.0 [17].

Fig. 3. NIST AI Risk Management Framework: Actors Across AI Lifecycle Stages (AI RMF) 1.0 [18].

The actors in Fig. 3 cover a wide swath of spaces where artificial intelligence (AI) plays, and appropriately so, as AI is considered a GPT (general purpose technology) like electricity, rubber, and the like, in that it can be applied ubiquitously in our lives [19]. This encompasses cognitive technology, digital reality, ambient experiences, autonomous vehicles and drones, quantum computing, distributed ledgers, and robotics, to name a few. These all predate the emergence of generative AI on the scene, which will likely put these vehicles to the test much earlier than expected. Yet all of these can be mapped across the AI lifecycle stages in Fig. 3 to clarify the activities, actors, and dimensions, and if a use case advances to the build stage, more scrutiny will need to be applied.

Scrutiny can come in the form of DevSecOps, but that is extremely hard to do with the exponentially massive codebases and training datasets required by the learning models, at least at this point. Moreover, we are not sure any AI ethics framework yet does justice to quality assurance (QA) and secure coding best practices. However, the above two NIST figures at least clarify relationships, flows, inputs, and outputs, but all of this will need to be greatly customized to an organization to have any teeth. We imagine those use cases will come out of future NIST working sessions with industry.

Lastly, the most crucial factor in AI ethics governance is what Fig. 3 calls “People and Planet”. This is because people and the planet can experience the negative aspects of AI in ways the designers did not imagine, and that feedback is valuable to product governance to prevent bigger AI disasters. Consider, for example, AI taking control of an air traffic control system and causing reroutes or accidents, or AI malware spreading faster than antivirus products can contain it, creating a cyber pandemic. Thus, making sure bias is reduced and safety is increased (per the DOD’s five AI principles) is key but certainly not easy or clear.

2. ChatGPT and Other Artificial Intelligence (AI) Tools Have Huge Security Risks:

It is fair to start off discussing the risks posed by ChatGPT and related tools to balance out all the positive feature coverage in the media and popular culture in recent months. First of all, with artificial intelligence (AI), every cyber threat actor has a new tool to better send spam, steal data, spread malware, build misinformation mills, grow botnets, launder cryptocurrency through shady exchanges [20], create fake profiles on multiple platforms, create fake romance chatbots, and build complex self-replicating malware that will often be akin to zero-day exploits.

One commentator described it this way in his well-circulated LinkedIn article: “It can potentially be a formidable social engineering and phishing weapon where non-native speakers can create flawlessly written phishing emails. Also, it will be much simpler for all scammers to mimic their intended victim’s tone, word choice, and writing style, making it more difficult than ever for recipients to tell the difference between a genuine and fraudulent email” [21]. Think of Mailchimp on steroids: sophisticated AI crafting billions of phishing emails and texts customized with impressively realistic details, including phone calls with fake voices that mimic your loved ones to build false corroboration [22].

SAP’s Head of Cybersecurity Market Strategy, Gabriele Fiata, took the words out of our mouths when he described it this way: “The threat landscape surrounding artificial intelligence (AI) is expanding at an alarming rate. Between January to February 2023, Darktrace researchers have observed a 135% increase in “novel social engineering” attacks, corresponding with the widespread adoption of ChatGPT” [23]. This is just the beginning. More malware-as-a-service propagation, fake bank sites, travel scams, and fake IT support centers will multiply to scam and extort the vulnerable, including elders, schools, local governments, and small businesses. Then there is the increased likelihood that antivirus and data loss prevention (DLP) tools will become less effective as AI morphs. Lastly, cyber criminals can and will use generative AI for advanced evidence tampering by creating fake content to confuse or dirty the chain of custody, lessen reliability, or outright frame the wrong actor, all while the government remains confused and behind the tech sector. It is truly a digital arms race.

Fig. 4. ChatGPT Exploit Risk Infographic [24].

In the sections that follow, we will discuss how artificial intelligence (AI) can enhance information security by increasing compliance, reducing risk, enabling new features of great value, and enabling application orchestration for threat visibility.

3. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):

The zero-trust model assumes that no user or system, even those within the corporate network, should be trusted by default. Access controls are strictly enforced, and continuous verification is performed to ensure the legitimacy of users and devices. Zero-trust moves organizations to a need-to-know-only access mindset (least privilege) with inherent deny rules, all the while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of applications, group membership reviews, and state-of-the-art privileged access management (PAM) tools. Password checkout and vaulting tools like CyberArk will improve to better inform toxic combination monitoring and reporting. There is still work in selecting and building the right tech components that fit into (not work against) the infrastructure orchestration stack. However, we believe rapidly built and deployed AI-based custom middleware can alleviate security orchestration mismatches in many cases. All of this is likely to better automate and orchestrate zero-trust abilities so that one part does not hinder another via complexity fog.
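A minimal sketch of such a deny-by-default decision, combining role, device posture, and MFA freshness, might look like the following; all field names, roles, resources, and thresholds are hypothetical stand-ins for what a real policy engine would evaluate.

```python
# Illustrative deny-by-default access decision in a zero-trust style: every
# request must present a verified identity, a healthy device, and recent MFA.
# All field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    resource: str
    device_compliant: bool      # e.g., disk encrypted, EDR agent healthy
    mfa_age_minutes: int        # minutes since last successful MFA

ALLOWED = {("claims_adjuster", "claims_db"), ("sre", "prod_dashboard")}

def decide(req: AccessRequest) -> bool:
    if not req.device_compliant:
        return False                          # unhealthy device: deny
    if req.mfa_age_minutes > 60:
        return False                          # stale MFA: force re-authentication
    return (req.user_role, req.resource) in ALLOWED   # least privilege, deny by default

print(decide(AccessRequest("claims_adjuster", "claims_db", True, 12)))       # True
print(decide(AccessRequest("claims_adjuster", "prod_dashboard", True, 12)))  # False
```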

4. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:

Artificial intelligence (AI) is increasingly being used to enhance threat detection capabilities. Machine learning algorithms analyze vast amounts of data to identify patterns indicative of potential security threats. This enables quicker and more accurate identification of malicious activities. Security information and event management (SIEM) systems enhanced with improved machine learning algorithms can detect anomalies in network traffic, application logs, and data flow – helping organizations identify potential security incidents faster.
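As a toy illustration of this kind of anomaly detection, the sketch below trains scikit-learn's IsolationForest on synthetic per-host features (bytes sent, failed logins, distinct destinations) and flags an exfiltration-like outlier; real SIEM pipelines use far richer features and tuning.

```python
# Anomaly detection sketch in the spirit of ML-assisted SIEM analytics, using
# scikit-learn's IsolationForest on synthetic per-host features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: bytes sent, failed logins, distinct destination IPs (synthetic "normal" hosts)
normal = rng.normal(loc=[50_000, 2, 10], scale=[10_000, 1, 3], size=(500, 3))
suspicious = np.array([[900_000, 30, 120]])          # exfiltration-like outlier

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))                     # [-1] means flagged as anomalous
print(model.predict(normal[:3]))                     # mostly [1], i.e., normal
```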

False positives, a sustained issue in the past, will be reduced; large, overconfident companies have repeatedly wasted millions of dollars per year fine-tuning underperforming security data lakes (we have seen this) that mostly produce garbage anomaly detection reports [25], [26], literally the kind good artificial intelligence (AI) laughs at. We are getting there. All the while, technology vendors currently try to solve this via better SIEM functionality at an increased price, yet we expect prices to drop substantially as the automation matures.

With improved natural language processing (NLP) techniques, artificial intelligence (AI) systems can analyze unstructured data sources, such as social media feeds, photos, videos, and news articles, to assemble useful threat intelligence. This ability to process and understand textual data empowers organizations to stay informed about indicators of compromise (IOCs) and new attack tactics. Vendors that provide these services include Darktrace, IBM, and CrowdStrike, and many startups will likely join soon. This space is wide open, and the biases of the past need to be forgotten if we want innovation. Young, fresh minds who know web 3.0 are valuable here. Thus, in the future more companies will likely not have to buy, but rather can build, their own customized threat detection tools informed by advancements in AI platform technology.

5. Quantum-Safe Cryptography Ripens:

Quantum computing is a quickly evolving technology that uses the laws of quantum mechanics, such as superposition and quantum interference, to solve problems too complex for traditional computers [27]. Some cases where quantum computers can provide a speed boost include simulation of physical systems, machine learning (ML), optimization, and more. Traditional cryptographic algorithms could be vulnerable because they were built on mathematical problems that, in many cases, have patterns a quantum computer could solve. “Industry experts generally agree that within 7-10 years, a large-scale quantum computer may exist that can run Shor’s algorithm and break current public-key cryptography causing widespread vulnerabilities” [28]. Quantum-safe or quantum-resistant cryptography is designed to withstand attacks from quantum computers, often artificial intelligence (AI) assisted, ensuring the long-term security of sensitive data. For example, AI can help enhance post-quantum cryptographic algorithms such as lattice-based cryptography or hash-based cryptography to secure communications [29]. Lattice-based cryptography is a cryptographic system based on the mathematical concept of a lattice. In a lattice, lines connect points to form a geometric structure or grid (Fig. 5).

Fig. 5. Simple Lattice Cryptography Grid [30].


This geometric lattice structure encodes and decodes messages. Although it looks finite, the grid is not finite in any way; rather, it represents a pattern that continues infinitely (Fig. 6).

Fig. 6. Complex Lattice Cryptography Grid [31].

Lattice-based cryptography benefits sensitive and highly targeted assets like large data centers, utilities, banks, hospitals, and government infrastructure generally. In other words, there will likely be mass adoption of quantum-resistant encryption for better security (a toy code sketch of the lattice-based idea follows the list below). Lastly, we used ChatGPT as an assistant to compile the below specific benefits of quantum cryptography, albeit with some manual corrections [32]:

  1. Detection of Eavesdropping:
    Quantum key distribution protocols can detect the presence of an eavesdropper by the disturbance introduced during the quantum measurement process, providing a level of security beyond traditional cryptography.
  2. Quantum-Safe Against Future Computers:
    Quantum computers have the potential to break many traditional cryptographic systems. Quantum cryptography is considered quantum-safe, as it relies on the fundamental principles of quantum mechanics rather than mathematical complexity.
  3. Near Unconditional Security:
    Quantum cryptography provides near unconditional security based on the principles of quantum mechanics. Any attempt to intercept or measure the quantum state will disturb the system, and this disturbance can be detected. Note that ChatGPT wrongly said “unconditional Security” and we corrected to “near unconditional security” as that is more realistic.
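To give some intuition for the lattice-based approach referenced above, here is a toy learning-with-errors (LWE) style encryption of a single bit; the parameters are deliberately tiny, the code is not secure, and it is meant only to show how small random errors hide the secret while still allowing correct decryption.

```python
# Toy learning-with-errors (LWE) style encryption of a single bit, to give
# intuition for lattice-based schemes. Parameters are tiny and NOT secure.
import numpy as np

q, n, m = 4093, 8, 16                      # modulus, secret dimension, samples
rng = np.random.default_rng(7)

# Key generation: secret s, public pair (A, b = A s + e mod q) with small error e.
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)            # small errors in {-1, 0, 1}
b = (A @ s + e) % q

def encrypt(bit: int):
    r = rng.integers(0, 2, size=m)         # random 0/1 selection of samples
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(u, v) -> int:
    d = (v - u @ s) % q                    # equals bit*(q//2) plus small noise
    return 1 if q // 4 < d < 3 * q // 4 else 0

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy LWE round-trip OK")
```

Production systems should rely on standardized schemes such as NIST's ML-KEM (CRYSTALS-Kyber) through vetted libraries rather than anything hand-rolled.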

6. Artificial Intelligence (AI) Driven Threat Response Ability Advances:

Artificial intelligence (AI) is used not only for threat detection but also for automating response actions [33]. This can include automatically isolating compromised systems, blocking malicious internet protocol (IP) addresses, closing firewalls, or orchestrating a coordinated response to a cyber incident, all for less money. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few of the current examples. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit and thus have more time to analyze more complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [34]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to evade this with the same AI but with no governance.
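A highly simplified playbook in this spirit, loosely following the OODA loop, is sketched below; the block_ip and open_ticket helpers are hypothetical placeholders and do not correspond to any specific SOAR vendor's API.

```python
# Sketch of a SOAR-style containment playbook loosely following observe/orient/
# decide/act. The block_ip() and open_ticket() helpers are hypothetical
# placeholders, not calls to any real vendor API.
HIGH_RISK_SCORE = 80

def block_ip(ip: str) -> None:
    print(f"[act] would push a deny rule for {ip} to the firewall")

def open_ticket(summary: str) -> None:
    print(f"[act] would open an incident ticket: {summary}")

def handle_alert(alert: dict) -> None:
    # observe + orient: pull the raw alert apart and enrich it (stubbed here)
    score = alert.get("risk_score", 0)
    ip = alert.get("source_ip", "unknown")
    # decide + act: auto-contain high-confidence threats, escalate the rest
    if score >= HIGH_RISK_SCORE:
        block_ip(ip)
        open_ticket(f"Auto-contained {ip} (score {score})")
    else:
        open_ticket(f"Analyst review needed for {ip} (score {score})")

handle_alert({"source_ip": "203.0.113.7", "risk_score": 91})
```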

7. Artificial Intelligence (AI) Streamlines Cloud Security Posture Management (CSPM):

As organizations increasingly migrate to cloud environments, ensuring the security of cloud assets becomes key. Vendors like Microsoft, Oracle, and Amazon Web Services (AWS) lead this space, yet large organizations maintain their own clouds for control as well. Cloud security posture management (CSPM) tools help organizations manage and secure their cloud infrastructure by continuously monitoring configurations and detecting misconfigurations that could lead to vulnerabilities [35]. These tools automatically assess cloud configurations for compliance with security best practices. This includes ensuring that only necessary ports are open and that encryption is properly configured. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [36]. This has considerations at both the cloud user and provider level, especially considering that artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug-ins from different vendors, making it all the more complex.
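For a sense of what a basic CSPM-style check involves, the sketch below uses boto3 (the AWS SDK for Python) to flag S3 buckets that lack a public access block or a default encryption configuration; it assumes AWS credentials are already configured, and a real CSPM tool would cover many more services and rules.

```python
# Minimal CSPM-style configuration check using boto3 (AWS SDK for Python):
# flag S3 buckets without a public access block or default encryption.
# Assumes AWS credentials are already configured in the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError:
        # A real tool would inspect the error code; here any failure is flagged.
        print(f"{name}: no public access block configured")
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError:
        print(f"{name}: no default encryption configuration found")
```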

8. Artificial Intelligence (AI) Enhanced Authentication Arrives:

Artificial intelligence (AI) is being utilized to strengthen user authentication methods. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege [37]. Two-factor authentication remains the bare-minimum standard, with many leading identity and access management (IAM) application makers, including Okta, SailPoint, and Google, experimenting with AI for improved analytics and functionality. Both two-factor and multifactor authentication benefit from AI advancements in machine learning via real-time access rights reassignment and improved role groupings [38]. However, multifactor remains stronger at this point because it includes something you are: biometrics. The jury is out on which method will remain the security leader because biometrics can be faked by AI [39]. Importantly, AI tools can remove fake or orphaned accounts much more quickly, reducing risk. However, they likely will not get it right 100% of the time, so there is a slight inconvenience.
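A toy version of the typing-pattern idea is sketched below: a session's average inter-keystroke timing is compared against a user's enrolled baseline, and a large deviation triggers step-up authentication; the timings and the three-sigma threshold are illustrative assumptions, not a production model.

```python
# Toy behavioral-biometrics check: compare a session's inter-keystroke timings
# (milliseconds) against a user's stored baseline and flag large deviations.
# The data and threshold are illustrative assumptions only.
import numpy as np

baseline = np.array([110, 95, 130, 105, 120, 98, 115, 125], dtype=float)   # enrolled user
session  = np.array([240, 260, 255, 230, 250, 245, 238, 252], dtype=float) # new session

mu, sigma = baseline.mean(), baseline.std()
z = abs(session.mean() - mu) / sigma        # how far the session drifts from the baseline

if z > 3.0:
    print(f"step-up authentication required (z = {z:.1f})")
else:
    print(f"typing rhythm consistent with enrolled profile (z = {z:.1f})")
```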

Conclusion and Recommendations:

Artificial intelligence (AI) remains a leading catalyst for digital transformation in tech automation, identity and access management (IAM), big data analytics, technology orchestration, and collaboration tools. AI based quantum computing serves to bolster encryption when old methods are replaced. All of the government actions to incubate ethics in AI are a good start and the NIST AI Risk Management Framework (AI RMF) 1.0 is long overdue. It will likely be tweaked based on private sector feedback. However, adding the DOD five principles for the ethical development of AI to the NIST AI RMF could derive better synergies. This approach should be used by the private sector and academia in customized ways. AI product ethical deviations should be thought of as quality control and compliance issues and remediated immediately.

Organizations should consider forming an AI governance committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. ChatGPT is a good encyclopedia and a cool Boolean search tool, yet it got some things wrong about quantum computing in this article for which we cited and corrected. The Simplified AI text to graphics generator was cool and useful but it needed some manual edits as well. Both of these generative AI tools will likely get better with time.

Artificial intelligence (AI) will spur many mobile malware and ransomware variants faster than Apple and Google can block them. This, in conjunction with the fact that people often have no mobile antivirus on their smartphones even if they have it on their personal and work computers, plus a culture of happy-go-lucky application downloading, makes it all the worse. As a result, more breaches should be expected via smartphones, watches, and eyeglasses from AI-enabled threats.

Therefore, education and awareness around the review and removal of non-essential mobile applications is a top priority, especially for mobile devices used separately or jointly for work purposes. Containerization is required via a mobile device management (MDM) tool such as Jamf, Hexnode, VMware, or Citrix Endpoint Management. A bring-your-own-device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. Mapping the mobile ecosystem components in detail is a must, including the AI touch points.

The growth and acceptability of mass work from home (WFH), combined with the mass resignation / gig economy, remind employers that great pay and culture alone are not enough to keep top talent. At this point AI only takes away some simple jobs while creating AI support jobs, though the percentages are not yet clear this early. Signing bonuses and personalized treatment are likely needed for top talent. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will likely expand to personal devices (BYOD) and smart phones / watches / eyeglasses. Geolocation-based authentication is here to stay, with double biometrics, likely fingerprint, eye scan, typing patterns, and facial recognition. The security perimeter remains more defined by data analytics than physical / digital boundaries, and we should dashboard this with machine learning tools as the use cases evolve.

Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity / fog. Organizations should preconfigure artificial intelligence (AI) based cloud-scale options and spend more on cloud-trained staff. They should also make sure they are selecting more than one cloud provider, ideally two or three, all separate from one another. This helps staff get cross-trained on different cloud platforms and plug-in applications. It also mitigates risk and makes vendors bid more competitively. There is huge potential for AI synergies with cloud security posture management (CSPM) tools and threat response tools; experimentation will likely yield future dividends. Organizations should not be passive and stuck in old paradigms. The older generations should seek to learn from the younger generations without bias. Also, comprehensive logging is a must for AI tools.

In regard to cryptocurrency, non-fungible tokens (NFTs), initial coin offerings (ICOs), and related exchanges, artificial intelligence (AI) will be used by crypto scammers and those seeking to launder money. Watch out for scammers who make big claims without details, white papers, filings, or explanations. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers and advisors want to share that information and will back it up with details in many documents and filings [40]. Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erring far on the side of compliance. This requires us to pay more attention to knowing and monitoring our own social media baselines; emerging AI data analytics can help here. If you use crypto mixer and / or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face high fees, zero customer service, no regulatory protection, no decent Terms of Service or Privacy Policy (if any), and no guarantee that the service will even work the way you think it will.

As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about this because if we are, then our organizations will stay weak and outdated and we will be plied by the same artificial intelligence (AI) generated political bias that we fear confronting. More social media training is needed as many security professionals still think it is mostly an external marketing thing.

It’s best to assume AI tools are reading all social media posts and all other available articles, including this article which we entered into ChatGPT for feedback. It was slightly helpful pointing out other considerations. Public-to-private partnerships (InfraGard) need to improve and application to application permissions need to be more scrutinized. Everyone does not need to be a journalist, but everyone can have the common sense to identify AI / malware-inspired fake news. We must report undue AI bias in big tech from an IT, compliance, media, and a security perspective. We must also resist the temptation to jump on the AI hype bandwagon but rather should evaluate each tool and use case based on the real-world business outcomes for the foreseeable future.

About the Authors:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, podcaster, and even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force, the Crystal, Robbinsdale and New Hope Citizens Police Academy, and the Minneapolis FBI Citizens Academy.

Matthew Versaggi is a senior leader in artificial intelligence with large-company healthcare experience who has seen hundreds of use cases. He is a distinguished engineer, built an organization’s “College of Artificial Intelligence”, introduced and matured both cognitive AI technology and quantum computing, has been awarded multiple patents, is an experienced public speaker, entrepreneur, strategist, and mentor, and has international business experience. He has an MBA in international business and economics and an MS in artificial intelligence from DePaul University, plus a BS in finance and MIS and a BA in computer science from Alfred University. Lastly, he has nearly a dozen professional certificates split between AI, technology, and business strategy.

References:


[1] Swenson, Jeremy, and NIST; Mashup 12/15/2023; “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”. 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.

[2] Swenson, Jeremy, and Simplified AI; AI Text to graphics generator. 01/08/24: https://app.simplified.com/

[3] Swenson, Jeremy, and ChatGPT; ChatGPT Logo Mashup. OpenAI. 12/15/23: https://chat.openai.com/auth/login

[4] The White House; “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.”    10/30/23: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ 

[5] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

[6] Sun, Zhiyuan; “Two individuals indicted for $25M AI crypto trading scam: DOJ.” Cointelegraph. 12/12/23: https://cointelegraph.com/news/two-individuals-indicted-25m-ai-artificial-intelligence-crypto-trading-scam

[7] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

[8] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

[9] The White House; “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.”    10/30/23: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ 

[10] EU. “EU AI Act: first regulation on artificial intelligence.” 12/19/23: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[11] Jackson, Amber; “Top 10 companies with ethical AI practices.” AI Magazine. 07/12/23: https://aimagazine.com/ai-strategy/top-10-companies-with-ethical-ai-practices

[12] Golbin, Ilana, and Axente, Maria Luciana; “9 ethical AI principles for organizations to follow.” World Economic Forum and PricewaterhouseCoopers (PWC). 06/23/21: https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/

[13] Lopez, Todd C; “DOD Adopts 5 Principles of Artificial Intelligence Ethics”. DOD News. 02/25/20: https://www.defense.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/

[14] Golbin, Ilana, and Axente, Maria Luciana; “9 ethical AI principles for organizations to follow.” World Economic Forum and PricewaterhouseCoopers (PWC). 06/23/21: https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/

[15] The Office of the Director of National Intelligence. “Principles of Artificial Intelligence Ethics for the Intelligence Community.” 07/23/20: https://www.dni.gov/index.php/newsroom/press-releases/press-releases-2020/3468-intelligence-community-releases-artificial-intelligence-principles-and-framework#:~:text=The%20Principles%20of%20AI%20Ethics,resilient%20by%20design%2C%20and%20incorporate

[16] NIST; “NIST AI RMF Playbook.” 01/26/23: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook

[17] NIST; “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

[18] NIST; “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

[19] Crafts, Nicholas; “Artificial intelligence as a general-purpose technology: an historical perspective.” Oxford Review of Economic Policy. Volume 37, Issue 3, Autumn 2021: https://academic.oup.com/oxrep/article/37/3/521/6374675

[20] Sun, Zhiyuan; “Two individuals indicted for $25M AI crypto trading scam: DOJ.” Cointelegraph. 12/12/23: https://cointelegraph.com/news/two-individuals-indicted-25m-ai-artificial-intelligence-crypto-trading-scam

[21] Patel, Pranav; “ChatGPT brings forth new opportunities and challenges to the Cybersecurity industry.” LinkedIn Pulse. 04/03/23: https://www.linkedin.com/pulse/chatgpt-brings-forth-new-opportunities-challenges-industry-patel/

[22] FTC; “Preventing the Harms of AI-enabled Voice Cloning.” 11/16/23: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning

[23] Fiata, Gabriele; “Why Evolving AI Threats Need AI-Powered Cybersecurity.” Forbes. 10/04/23: https://www.forbes.com/sites/sap/2023/10/04/why-evolving-ai-threats-need-ai-powered-cybersecurity/?sh=161bd78b72ed

[24] Patel, Pranav; “ChatGPT brings forth new opportunities and challenges to the Cybersecurity industry.” LinkedIn Pulse. 04/03/23: https://www.linkedin.com/pulse/chatgpt-brings-forth-new-opportunities-challenges-industry-patel/

[25] Tobin, Donal; “What Challenges Are Hindering the Success of Your Data Lake Initiative?” Integrate.io. 10/05/22: https://www.integrate.io/blog/data-lake-initiative/

[26] Chuvakin, Anton; “Why Your Security Data Lake Project Will … Well, Actually …” Medium. 10/22/22. https://medium.com/anton-on-security/why-your-security-data-lake-project-will-well-actually-78e0e360c292

[27] Amazon Web Services; “What are the types of quantum technology?” 01/07/23: https://aws.amazon.com/what-is/quantum-computing/ 

[28] ISARA Corporation; “What is Quantum-safe Cryptography?” 2023: https://www.isara.com/resources/what-is-quantum-safe.html

[29] Swenson, Jeremy, and ChatGPT; OpenAI. 12/15/23: https://chat.openai.com/auth/login

[30] Utimaco; “What is Lattice-based Cryptography?” 2023: https://utimaco.com/service/knowledge-base/post-quantum-cryptography/what-lattice-based-cryptography

[31] D. Bernstein, and T. Lange; “Post-quantum cryptography – dealing with the fallout of physics success.” IACR Cryptology. 2017: https://www.semanticscholar.org/paper/Post-quantum-cryptography-dealing-with-the-fallout-Bernstein-Lange/a515aad9132a52b12a46f9a9e7ca2b02951c5b82

[32] Swenson, Jeremy, and ChatGPT; OpenAI. 12/15/23: https://chat.openai.com/auth/login

[33] Sibanda, Isla; “AI and Machine Learning: The Double-Edged Sword in Cybersecurity.” RSA Conference. 12/13/23: https://www.rsaconference.com/library/blog/ai-and-machine-learning-the-double-edged-sword-in-cybersecurity

[34] Michael, Katina, Abbas, Roba, and Roussos, George; “AI in Cybersecurity: The Paradox.” IEEE Transactions on Technology and Society. Vol. 4, no. 2: pg. 104-109. 2023: https://ieeexplore.ieee.org/abstract/document/10153442

[35] Microsoft; “What is CSPM?” 01/07/24: https://www.microsoft.com/en-us/security/business/security-101/what-is-cspm 

[36] Rosencrance, Linda; “How to choose the best cloud security posture management tools.” CSO Online. 10/30/23: https://www.csoonline.com/article/657138/how-to-choose-the-best-cloud-security-posture-management-tools.html

[37] Muneer, Salman Muneer, Muhammad Bux Alvi, and Amina Farrakh; “Cyber Security Event Detection Using Machine Learning Technique.” International Journal of Computational and Innovative Sciences. Vol. 2, no (2): pg. 42-46. 2023: https://ijcis.com/index.php/IJCIS/article/view/65.

[38] Azhar, Ishaq; “Identity Management Capability Powered by Artificial Intelligence to Transform the Way User Access Privileges Are Managed, Monitored and Controlled.” International Journal of Creative Research Thoughts (IJCRT), ISSN:2320-2882, Vol. 9, Issue 1: pg. 4719-4723. January 2021: https://ssrn.com/abstract=3905119

[39] FTC; “Preventing the Harms of AI-enabled Voice Cloning.” 11/16/23: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning

[40] FTC; “What To Know About Cryptocurrency and Scams.” May 2022: https://consumer.ftc.gov/articles/what-know-about-cryptocurrency-and-scams

Abstract Forward Consulting Now Open For Business!


In 2016 Mr. Swenson decided to go back to graduate school to pursue a second master’s degree, in Security Technologies, at the University of MN’s renowned Technological Leadership Institute to position himself to launch a technology leadership consulting firm. This degree was completed in 2017 and positions Swenson as a creative and security-savvy senior consultant to CIOs, CTOs, CEOs, and other business line leaders. His capstone was on “pre-cursor detection of data exfiltration” and included input from many of the region’s CIOs, CISOs, CEOs, and state government leaders. His capstone advisor was technology and security pioneer Brian Isle of Adventium Labs.

Over 14 years, Mr. Swenson has had the honor and privilege of consulting at 10 organizations in 7 industries on progressively complex and difficult problems in I.T., including security, project management, business analysis, data archival and governance, audit, web application launch and decommission, strategy, information security, data loss prevention, communication, and even board of directors governance. From governments, banks, insurance companies, minority-owned small businesses, marketing companies, technology companies, and healthcare companies, he has a wealth of abstract experience backed up by the knowledge from his 4 degrees and validated by his 40,000 followers (from LinkedIn, Twitter, and his blog). Impressively, the results are double-digit risk reductions, huge vetted process improvements, and $25 million or more in savings per project on average!

As demand for his contract consulting work has increased, he has continued to write and speak on how to achieve such results. He is often called upon to explain his process and style to organizations and people. While most accept it and get on board quickly, some are not ready, largely because they are stuck in the past and, due to confirmation bias, afraid to admit their own errors. Two great technology leaders, Steve Jobs (Apple) and Carly Fiorina (HP), often described how doing things differently would attract detractors. Yet that is exactly why there is a need for Abstract Forward Consulting.

With the wind at our backs, we will press on because the world requires better results and we hold ourselves to higher standards (if you want to know more, reach out below). With a heart to serve many organizations and people, we have blended this process and experience into a new consulting firm, one that puts abstract thinking first to reduce risk, improve security, and enhance business technology.

Proudly announcing: Abstract Forward Consulting, LLC.

Company Mission Statement: We use abstract thinking on security, risk, and technology problems to move business forward!

Company Vision: To be the premier provider of technology and security consulting services while making the world a better and safer place.

Main service offerings for I.T. and business leaders:

1) Management Consulting

2) Cyber Security Consulting

3) Risk Management Consulting

4) Data Governance Consulting

5) Enterprise Collaboration Tools Consulting

6) Process Improvement Consulting

If you would like a free exploratory conversation about how we can help your organization, please contact us here or inbox me. As our business grows, we will announce more people and tactics to build a tidal wave that makes your organization the best it can be!

Thanks to the community for your support!

Founder and CEO: Abstract Forward Consulting, LLC.

Jeremy Swenson, MBA, MSST (Master of Science in Security Technologies)

The Danger of Thinking a Title Makes You a Leader (Expanded)


Leadership is about enabling the potential in others and getting out of the way so their dreams can enable something bigger. Having people paid to report to you does not make you a leader; it more likely makes you a manager, which is a respectable and worthwhile career path, but it is not leadership. It is not even close to leadership! When people choose to follow you without money or title, that is leadership. In this context, title is derived from results and action first. As a leader, you are responsible for incubating synergies to get three out of two. Leadership is about influence, not title. Title is a mostly meaningless word that constantly changes in today's amorphous corporate culture.

Title without great external influence is no title at all. How can you move someone's cheese when you can't even move your community? Leadership STARTS at the community level, and its nuclear power resides there. Community-based leadership has overthrown many ruthless dictators, leading scammers, and corporate bullies. Real leaders understand the value of academic inquiry (formal or informal), history, and change, and that together these are the precursors to innovation. They also understand that innovation is a team effort, and they don't seek to steal the spotlight.

Former HP CEO and presidential candidate Carly Fiorina said it best: "leadership is about changing the order of things." Changing the order of things is dangerous because it has many unknowns and it ruffles the feathers of those presently holding power. If you are truly a leader, or aspire to be one, get ready to be attacked many times. All TRUE leaders are different and DO NOT FIT IN with most people or the status quo; they are bullied, harassed, and attacked, and that is the life they know. They can lead in times of great stress and controversy, while the vast majority of people could never come close and would break like a generic toothpick at the first sign of light criticism.

Carly Fiorina on Management vs. Leadership – Stanford University, 2007.

Although many executives say or believe they are leaders, their actions contradict that. All too often they cannot handle the criticism that comes with true leadership, and they are frequently afraid of change or of people with abstract cultural personas. In many parts of their personal lives, they could not even pass the simplest leadership test: helping someone less fortunate in a disaster situation when nobody else will. Very often they insulate themselves with simple-minded yes-sayers, fire people who question them, and are more concerned with the superficial status that comes from being wined and dined by vendors serving their vertical. Types like these are fools masquerading as leaders, but there are plenty of them.

The real life of a leader is lonely, and some will think you're crazy. The people (mostly fools) who think you're crazy don't understand diversity, the evolution of culture, or true creativity, and they most likely could never connect the dots to realize any noteworthy synergy. Yet they often hype all kinds of useless nonsense to promote their fallacious status:

1) "You can't argue with me; I am a Director, therefore I am right." Truth: delusional.
2) "I am a VP; therefore, my ideas are innovative." Truth: no one credible declares their own innovation.
3) "I am a 27-year-old director and won't make time for you because I am in a leader development program." Truth: leader development programs have next to no track record and teach corporate conformity. A leader development program would not have helped Bill Gates, Martin Luther King Jr., or Mark Zuckerberg.

With great respect for everyone, in my experience the people making these types of arguments are the biggest fools of all, and they are usually one-trick ponies – good at only one or two things, and only for a short period of time. If you fall for them, you have been scammed.

Examples of true leaders include Billy Corgan (alternative rock music pioneer), the Wright Brothers (builders and pilots of the first airplane), William Kunstler (landmark civil rights attorney), John McAfee (anti-virus pioneer), and Steve Jobs (computer pioneer). These people were all criticized in their early years and pushed many people out of their inner circles. Although such criticism and isolation may have broken others, it did not break them.

Most often, real leaders don't fit in with most people, and unless they attain fame or money they are ostracized. So many in our society are overly focused on fame, media hype, and money. Yet real leaders are not distracted by these immoral fallacies, for they have nothing to do with life satisfaction, moral progress, or any type of synergy. Real leaders undeniably inspire movements, better people, and better processes, and with their vision and advocacy, society, business, and/or technology reach heights never dreamed possible. Very few people see this at the time, though many are happy to jump on the bandwagon decades after it's validated as cool by the masses.

Martin Luther King Jr. was one such leader; he paid the ultimate price but inspired a civil rights revolution that redefined America – William Kunstler defended him. The philosopher and teacher Socrates was unjustly condemned to death for questioning the status quo of Athenian politics and society, and for teaching students to do the same for a better world. Today his ideas and approach have proven foundational to much of Western philosophy and education. His name is associated with the Socratic method, which means questioning everything. It is the hallmark of how law schools teach students throughout most of the world, and it is a methodology credited with saving thousands of lives.

Yet some corporate leaders do not like to be questioned, even by the most validated intellectuals. Case in point: when credible writer and analyst Bethany McLean questioned Enron CEO Jeff Skilling in 2001 about Enron's public financials, he blew her off and created a smokescreen to cover up large-scale fraud. It is no surprise that Enron is now defunct, that Skilling went to prison, and that McLean has proven to be the real leader. Having met her, read her works, and corresponded with her, I know she embodies everything that makes up a great leader. Great leaders have no problem taking questions from validated individuals of all walks and ranks because they have nothing to hide (including insecurities), and they can use the dialogue to advance their innovative mission. In the data-centric democracy of the United States, business and technology fads come and go, and now is about the new – false leadership will be short-lived.

Socrates Condemned to Death Speech – 399 B.C.

I will take the person with the best ideas and passionate followers over someone who gloats about what prior titles supposedly prove. Titles by themselves, and even with experience, do not prove much at all. In the constantly changing landscape of technology, titles for the most part do not matter. Results, creativity, and inspirational, empathetic leadership are what matter – emphasis added!

If you focus too much on title, the guy or girl with the right idea will run you out of business, and you and your whole team will be left with little money and no title. Please think long and hard about this if you are claiming to be a leader. You don't want to be like Kodak and fail to see that digital cameras were the future, and you don't want to be the leader who failed to see a data breach coming. You don't want to be an overconfident leader who self-declares your morality over subordinates' objections but who, years or even decades later, is deemed greatly immoral. You don't want to be that executive whose peers support you only because they are paid to, while they neither respect you nor are inspired by you. This happens a lot, and under good governance such faulty leadership will be short-lived.

Lastly, when someone gloats about their VP, Director, SVP, or similar title, ask them how many people would follow them passionately, without pay, in times of great challenge and while others criticize them. Likely they will be confused, because most real leaders work below the surface to make the world a better place, while the fakers above seek status and "yes" cliques. They know nothing about leadership or moral courage. To think that titles are a rite of passage to leadership is one of the most dangerous fallacies in society to date. It has caused wars to be lost, inspired political violence, caused elections to be lost and technologies to be missed, and it is a solvable irony for a society as advanced and gifted as the human race. What are you doing to be your own best leader for the greater good of others? I assure you it has nothing to do with title.

If you want to talk more about these and related concepts, please contact me here.