Key Artificial Intelligence (AI) Cyber-Tech Trends and What They Mean for the Future

Minneapolis –

#cryptonews #cyberrisk #techrisk #techinnovation #techyearinreview #infosec #musktwitter #disinformation #cio #ciso #cto #chatgpt #openai #airisk #iam #rbac #artificialintelligence #samaltman #aiethics #nistai #futurereadybusiness #futureofai

By Jeremy Swenson & Matthew Versaggi

Fig. 1. Quantum ChatGPT Growth Plus NIST AI Risk Management Framework Mashup [1], [2], [3].

Summary:

This year is unique: policy makers and business leaders grew concerned with artificial intelligence (AI) ethics, disinformation morphed, and AI saw hyper growth, including connections to increased crypto money laundering via splitting / mixing. Impressively, AI cyber tools became more capable in the areas of zero-trust orchestration, cloud security posture management (CSPM), and threat response via improved machine learning; quantum-safe cryptography ripened; and authentication made real-time monitoring advancements, while some hype remains. Moreover, the mass resignation / gig economy (remote work) remained a large part of the catalyst for all of these trends.

Introduction:

Every year we like to research and comment on the most impactful security technology and business happenings from the prior year. This year is unique since policy makers and business leaders grew concerned with artificial intelligence (AI) ethics [4], disinformation morphed, AI had hyper growth [5], crypto money laundering via splitting / mixing grew [6], and AI cyber tools became more capable, while the mass resignation / gig economy remained a large part of the catalyst for all of these trends. By August 2023 ChatGPT reached 1.43 billion website visits per month and about 180.5 million registered users [7]. The platform even attracted many non-technical naysayers. Impressively, it was only nine months old then and just turned a year old in November [8]. These numbers for AI tools like ChatGPT are going to continue to grow in many sectors at exponential rates. As a result, the below trends and considerations are likely to significantly impact government, education, high-tech, startups, and large enterprises in big and small ways, albeit with some surprises.

1. The Complex Ethics of Artificial Intelligence (AI) Swarms Policy Makers and Industry Resulting in New Frameworks:

The ethical use of artificial intelligence (AI), as a conceptual and increasingly practical dilemma, has gained a lot of media attention and research in the last few years from those in philosophy (ethics, privacy), politics (public policy), academia (concepts and principles), and economics (trade policy and patents), all of whom have weighed in heavily. As a result, we find this space is beginning to mature. Sovereign nations (the USA, the EU, and others globally) have developed and socialized ethical policies and frameworks [9], [10]. Meanwhile, major corporations motivated by profit are devising their own ethical vehicles and structures, often taking a legalistic view first [11]. Moreover, the World Economic Forum (WEF) has weighed in on this matter in collaboration with PricewaterhouseCoopers (PWC) [12]. All of this contributes to the accelerated pace of maturity of this area in general. The result is the establishment of shared conceptual viewpoints, early-stage security frameworks, accepted policies, guidelines, and governance structures to support the evolution of artificial intelligence (AI) in ethical ways.

For example, the Department of Defense (DOD) has formally adopted five principles for the ethical development of artificial intelligence capabilities as follows [13]:

  1. Responsible
  2. Equitable
  3. Traceable
  4. Reliable
  5. Governable

Traceable and governable seem to be the clearest and most important principles, while equitable and responsible seem gray at best and could be deemphasized in a heightened wartime context. The latter two echo the corporate social responsibility (CSR) efforts found more often in the private sector.

The WEF via PWC has issued its Nine AI Ethical Principles for organizations to follow [14], and the Office of the Director of National Intelligence (ODNI) has released its Framework for AI Ethics [15]. Importantly, the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework, as outlined in Fig. 2 and Fig. 3. NIST also released a playbook to support its implementation and has hosted several working sessions discussing it with industry, which we attended virtually [16]. It seems the mapping aspect could take you down many AI rabbit holes, some unforeseen, implying complex risk. Mapping also impacts how you measure and manage. None of this is fully clear, and much of it will change as ethical AI governance matures.

Fig. 2. NIST AI Risk Management Framework (AI RMF) 1.0 [17].

Fig. 3. NIST AI Risk Management Framework: Actors Across AI Lifecycle Stages (AI RMF) 1.0 [18].

The actors in Fig. 3 cover a wide swath of spaces where artificial intelligence (AI) plays, and appropriately so, as AI is considered a general purpose technology (GPT) like electricity, rubber, and the like, in that it can be applied ubiquitously in our lives [19]. This spans cognitive technology, digital reality, ambient experiences, autonomous vehicles and drones, quantum computing, distributed ledgers, and robotics, to name a few. These all predate the emergence of generative AI, which will likely put these vehicles to the test much earlier than expected. Yet all of them can be mapped across the AI lifecycle stages in Fig. 3 to clarify the activities, actors, and dimensions, and if a system reaches the build stage, more scrutiny will need to be applied.

Scrutiny can come in the form of DevSecOps, but that is extremely hard to do with the exponentially massive code and datasets required by the learning models, at least at this point. Moreover, we are not sure any AI ethics framework does justice to quality assurance (QA) and secure coding best practices at this point. However, the above two NIST figures at least clarify relationships, flows, inputs, and outputs, though all of this will need to be greatly customized to an organization to have any teeth. We imagine those use cases will come out of future NIST working sessions with industry.

Lastly, the most crucial factor in AI ethics governance is what Fig. 3 calls “People and Planet”. This is because people and the planet can experience the negative aspects of AI in ways the designers did not imagine, and that feedback is valuable to product governance to prevent bigger AI disasters. Consider, for example, AI taking control of the air traffic control system and causing reroutes or accidents, or AI malware spreading faster than antivirus products can defend against it, creating a cyber pandemic. Thus, making sure bias is reduced and safety increased (per the DOD's five AI principles) is key but certainly not easy or clear.

2. ChatGPT and Other Artificial Intelligence (AI) Tools Have Huge Security Risks:

It is fair to start off discussing the risks posed by ChatGPT and related tools to balance out all the positive feature coverage in the media and popular culture in recent months. First of all, with artificial intelligence (AI), every cyber threat actor has a new tool to better send spam, steal data, spread malware, build misinformation mills, grow botnets, launder cryptocurrency through shady exchanges [20], create fake profiles on multiple platforms, create fake romance chatbots, and build complex self-replicating malware that will often be akin to zero-day exploits.

One commentator described it this way in his well-circulated LinkedIn article: “It can potentially be a formidable social engineering and phishing weapon where non-native speakers can create flawlessly written phishing emails. Also, it will be much simpler for all scammers to mimic their intended victim’s tone, word choice, and writing style, making it more difficult than ever for recipients to tell the difference between a genuine and fraudulent email” [21]. Think of MailChimp on steroids, with a sophisticated AI team crafting millions and billions of phishing e-mails / texts customized with impressively realistic details, including phone calls with fake voices that mimic your loved ones to build fake corroboration [22].

SAP’s Head of Cybersecurity Market Strategy, Gabriele Fiata, took the words out of our mouths when he described it this way: “The threat landscape surrounding artificial intelligence (AI) is expanding at an alarming rate. Between January to February 2023, Darktrace researchers have observed a 135% increase in “novel social engineering” attacks, corresponding with the widespread adoption of ChatGPT” [23]. This is just the beginning. More malware-as-a-service propagation, fake bank sites, travel scams, and fake IT support centers will multiply to scam and extort the weak, including elders, schools, local government, and small businesses. Then there is the increased likelihood that antivirus and data loss prevention (DLP) tools will become less effective as AI morphs. Lastly, cyber criminals can and will use generative AI for advanced evidence tampering by creating fake content to confuse or dirty the chain of custody, lessen reliability, or outright frame the wrong actor, all while the government remains confused and behind the tech sector. It is truly a digital arms race.

Fig. 4. ChatGPT Exploit Risk Infographic [24].

In the next section we will discuss how artificial intelligence (AI) can enhance information security by increasing compliance, reducing risk, enabling new features of great value, and enabling application orchestration for threat visibility.

3. The Zero-Trust Security Model Becomes More Orchestrated via Artificial Intelligence (AI):

The zero-trust model assumes that no user or system, even those within the corporate network, should be trusted by default. Access controls are strictly enforced, and continuous verification is performed to ensure the legitimacy of users and devices. Zero-trust moves organizations to a need-to-know-only access mindset (least privilege) with inherent deny rules, all while assuming you are compromised. This implies single sign-on at the personal device level and improved multifactor authentication. It also implies better role-based access controls (RBAC), firewalled networks, improved need-to-know policies, effective whitelisting and blacklisting of applications, group membership reviews, and state-of-the-art privileged access management (PAM) tools. Password checkout and vaulting tools like CyberArk will improve to better inform toxic combination monitoring and reporting. There is still work in selecting / building the right tech components that fit into (not work against) the infrastructure orchestration stack. However, we believe rapidly built and deployed AI-based custom middleware can alleviate security orchestration mismatches in many cases. All of this is likely to better automate and orchestrate zero-trust abilities so that one part does not hinder another via complexity fog.
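
To make the deny-by-default idea concrete, below is a minimal sketch of a least-privilege access check. The roles, resources, and the continuous-verification flag are illustrative assumptions, not any specific vendor's API:

    # Minimal zero-trust style access check: deny by default, grant only
    # explicit role-to-resource permissions (RBAC), and require a fresh
    # verification signal on every request.
    from dataclasses import dataclass

    # Hypothetical role-to-permission map; a real deployment would pull
    # this from an IAM / PAM system, not hard-code it.
    ROLE_PERMISSIONS = {
        "hr_analyst": {("payroll_db", "read")},
        "db_admin": {("payroll_db", "read"), ("payroll_db", "write")},
    }

    @dataclass
    class AccessRequest:
        user: str
        role: str
        resource: str
        action: str
        mfa_verified: bool  # continuous verification signal

    def is_allowed(req: AccessRequest) -> bool:
        """Deny unless the role explicitly grants the action AND the
        session has passed multifactor verification."""
        if not req.mfa_verified:           # assume compromise: re-verify
            return False
        allowed = ROLE_PERMISSIONS.get(req.role, set())
        return (req.resource, req.action) in allowed

    print(is_allowed(AccessRequest("amy", "hr_analyst", "payroll_db", "read", True)))   # True
    print(is_allowed(AccessRequest("amy", "hr_analyst", "payroll_db", "write", True)))  # False: deny by default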

4. Artificial Intelligence (AI) Powered Threat Detection Has Improved Analytics:

Artificial intelligence (AI) is increasingly being used to enhance threat detection capabilities. Machine learning algorithms analyze vast amounts of data to identify patterns indicative of potential security threats. This enables quicker and more accurate identification of malicious activities. Security information and event management (SIEM) systems enhanced with improved machine learning algorithms can detect anomalies in network traffic, application logs, and data flow – helping organizations identify potential security incidents faster.
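
To illustrate the kind of anomaly detection an ML-enhanced SIEM performs, here is a minimal sketch using scikit-learn's IsolationForest; the flow features and numbers are assumptions for demonstration only:

    # Toy anomaly detection over network-flow features, in the spirit of
    # ML-enhanced SIEM analytics. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Assumed features per flow: [bytes_sent, duration_sec, failed_logins]
    normal = rng.normal(loc=[5_000, 30, 0], scale=[1_500, 10, 0.5], size=(500, 3))
    suspicious = np.array([[90_000, 2, 12],   # exfil-like burst plus login failures
                           [70_000, 1, 9]])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    for flow in suspicious:
        label = model.predict(flow.reshape(1, -1))[0]  # -1 = anomaly, 1 = normal
        print(flow, "ANOMALY" if label == -1 else "normal")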

False positives, a sustained issue in the past, will be reduced; large overconfident companies have repeatedly wasted millions of dollars per year fine-tuning useless data security lakes (we have seen this) that mostly produce garbage anomaly detection reports [25], [26]. Literally the kind good artificial intelligence (AI) laughs at; we are getting there. All the while, technology vendors try to solve this via better SIEM functionality at an increased price. Yet we expect prices to drop substantially as the automation matures.

With improved natural language processing (NLP) techniques, artificial intelligence (AI) systems can analyze unstructured data sources, such as social media feeds, photos, videos, and news articles, to assemble useful threat intelligence. This ability to process and understand unstructured data empowers organizations to stay informed about indicators of compromise (IOCs) and new attack tactics. Vendors that provide these services include Darktrace, IBM, and CrowdStrike, and many startups will likely join soon. This space is wide open, and the biases of the past need to be forgotten if we want innovation. Young, fresh minds who know web 3.0 are valuable here. Thus, in the future more companies will likely not have to buy but rather can build their own customized threat detection tools informed by advancements in AI platform technology.
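
As a small, hedged illustration of the simplest layer of such processing, the sketch below pulls common indicator-of-compromise (IOC) patterns out of unstructured text with regular expressions; real platforms layer much richer language models on top of this:

    # Minimal IOC extraction from unstructured text using regex.
    import re

    IOC_PATTERNS = {
        "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
        "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b"),
    }

    # Hypothetical threat report text for demonstration.
    report = """Threat actor used 203.0.113.77 to stage payloads on
    badcdn-update.net; dropper hash
    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855."""

    for kind, pattern in IOC_PATTERNS.items():
        print(kind, "->", pattern.findall(report))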

5. Quantum-Safe Cryptography Ripens:

Quantum computing is a quickly evolving technology that uses the laws of quantum mechanics, such as superposition and quantum interference, to solve problems too complex for traditional computers [27]. Some cases where quantum computers can provide a speed boost include simulation of physical systems, machine learning (ML), optimization, and more. Traditional cryptographic algorithms could be vulnerable because they were built on mathematical problems with patterns a quantum computer may be able to solve, at least in many cases. “Industry experts generally agree that within 7-10 years, a large-scale quantum computer may exist that can run Shor’s algorithm and break current public-key cryptography causing widespread vulnerabilities” [28]. Quantum-safe or quantum-resistant cryptography is designed to withstand attacks from quantum computers, often artificial intelligence (AI) assisted, ensuring the long-term security of sensitive data. For example, AI can help enhance post-quantum cryptographic algorithms such as lattice-based cryptography or hash-based cryptography to secure communications [29]. Lattice-based cryptography is a cryptographic system based on the mathematical concept of a lattice. In a lattice, lines connect points to form a geometric structure or grid (Fig. 5).

Fig. 5. Simple Lattice Cryptography Grid [30].


This geometric lattice structure encodes and decodes messages. Although it looks finite, the grid is not; rather, it represents a pattern that continues infinitely (Fig. 6).

Fig. 6. Complex Lattice Cryptography Grid [31].
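
For readers who want a taste of the underlying math, many lattice schemes rest on the learning-with-errors (LWE) problem. The toy sketch below (didactic only, not secure) shows why the small noise term matters; all parameters are illustrative assumptions:

    # Toy learning-with-errors (LWE) flavor demo: the hard problem behind
    # lattice cryptography is recovering s from (A, b = A@s + e mod q)
    # when e is small random noise.
    import numpy as np

    rng = np.random.default_rng(7)
    q, n, m = 97, 4, 8                     # tiny parameters for illustration
    A = rng.integers(0, q, size=(m, n))    # public random matrix
    s = rng.integers(0, q, size=n)         # secret vector
    e = rng.integers(-2, 3, size=m)        # small noise

    b = (A @ s + e) % q                    # public pair (A, b); hard to invert
    b_no_noise = (A @ s) % q               # without e, easy linear algebra

    print("with noise:   ", b)
    print("without noise:", b_no_noise)
    # The tiny error term is the whole trick: it turns routine Gaussian
    # elimination into a problem believed hard even for quantum computers.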

Lattice-based cryptography benefits sensitive and highly targeted assets like large data centers, utilities, banks, hospitals, and government infrastructure generally. In other words, there will likely be mass adoption of quantum-safe encryption for better security. Lastly, we used ChatGPT as an assistant to compile the below specific benefits of quantum cryptography, albeit with some manual corrections [32]:

  1. Detection of Eavesdropping:
    Quantum key distribution protocols can detect the presence of an eavesdropper by the disturbance introduced during the quantum measurement process, providing a level of security beyond traditional cryptography (see the sketch after this list).
  2. Quantum-Safe Against Future Computers:
    Quantum computers have the potential to break many traditional cryptographic systems. Quantum cryptography is considered quantum-safe, as it relies on the fundamental principles of quantum mechanics rather than mathematical complexity.
  3. Near Unconditional Security:
    Quantum cryptography provides near unconditional security based on the principles of quantum mechanics. Any attempt to intercept or measure the quantum state will disturb the system, and this disturbance can be detected. Note that ChatGPT wrongly said “unconditional Security” and we corrected to “near unconditional security” as that is more realistic.
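
Below is the sketch referenced in item 1: a toy, purely classical simulation of the BB84 intuition, in which an eavesdropper who measures in the wrong basis disturbs roughly a quarter of the compared bits. The protocol details here are simplified assumptions for illustration only:

    # Toy BB84-style simulation: Eve's wrong-basis measurements randomize
    # bits, so her presence shows up as errors when Alice and Bob compare
    # a sample of their key. Classical simulation, not real quantum code.
    import random

    def bb84_error_rate(n: int, eavesdrop: bool) -> float:
        """Simulate n photons; return the error rate on basis-matched rounds."""
        errors = kept = 0
        for _ in range(n):
            bit = random.randint(0, 1)
            a_basis = random.choice("+x")     # Alice's encoding basis
            v, s = bit, a_basis               # photon's value and basis
            if eavesdrop:
                e_basis = random.choice("+x")
                if e_basis != s:              # wrong basis randomizes the result
                    v = random.randint(0, 1)
                s = e_basis                   # Eve re-sends in her own basis
            b_basis = random.choice("+x")     # Bob's measurement basis
            measured = v if b_basis == s else random.randint(0, 1)
            if b_basis == a_basis:            # bases compared publicly; keep matches
                kept += 1
                errors += measured != bit
        return errors / kept

    print(f"no eavesdropper:   {bb84_error_rate(20_000, False):.1%}")  # ~0%
    print(f"with eavesdropper: {bb84_error_rate(20_000, True):.1%}")   # ~25%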

6. Artificial Intelligence (AI) Driven Threat Response Ability Advances:

Artificial intelligence (AI) is used not only for threat detection but also for automating response actions [33]. This can include automatically isolating compromised systems, blocking malicious internet protocol (IP) addresses, tightening firewall rules, or orchestrating a coordinated response to a cyber incident, all for less money. Security orchestration, automation, and response (SOAR) platforms leverage AI to analyze and respond to security incidents, allowing security teams to automate routine tasks and respond more rapidly to emerging threats. Microsoft Sentinel, Rapid7 InsightConnect, and FortiSOAR are just a few current examples. Basically, AI tools will help SOAR tools mature so security operations centers (SOCs) can catch the low-hanging fruit; thus, they will have more time to analyze more complex threats. These AI tools will employ the observe, orient, decide, act (OODA) loop methodology [34]. This will allow them to stay up to date, customized, and informed of many zero-day exploits. At the same time, threat actors will constantly try to avert this with the same AI but with no governance.
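
As a vendor-neutral sketch of what a SOAR playbook automates, consider the toy responder below, which walks the OODA loop for a single alert; the alert fields, threat feed, and actions are illustrative assumptions, not any product's API:

    # Toy SOAR-style playbook: observe an alert, orient via enrichment,
    # decide on a response, act (OODA). Actions are stubbed with prints;
    # a real platform would call firewall / EDR / ticketing APIs.
    KNOWN_BAD_IPS = {"203.0.113.77", "198.51.100.9"}  # assumed threat intel feed

    def enrich(alert: dict) -> dict:
        alert["ip_known_bad"] = alert["src_ip"] in KNOWN_BAD_IPS
        return alert

    def respond(alert: dict) -> None:
        if alert["ip_known_bad"] and alert["severity"] >= 7:
            print(f"[ACT] blocking {alert['src_ip']} at the firewall")
            print(f"[ACT] isolating host {alert['host']}")
            print("[ACT] opening incident ticket for the SOC")
        else:
            print("[ACT] logging alert for analyst review")

    alert = {"src_ip": "203.0.113.77", "host": "wkstn-042", "severity": 9}
    respond(enrich(alert))  # low-hanging fruit handled automatically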

7. Artificial Intelligence (AI) Streamlines Cloud Security Posture Management (CSPM):

As organizations increasingly migrate to cloud environments, ensuring the security of cloud assets becomes key. Vendors like Microsoft, Oracle, and Amazon Web Services (AWS) lead this space, yet large organizations have their own clouds for control as well. Cloud security posture management (CSPM) tools help organizations manage and secure their cloud infrastructure by continuously monitoring configurations and detecting misconfigurations that could lead to vulnerabilities [35]. These tools automatically assess cloud configurations for compliance with security best practices. This includes ensuring that only necessary ports are open and that encryption is properly configured. “Keeping data safe in the cloud requires a layered defense that gives organizations clear visibility into the state of their data. This includes enabling organizations to monitor how each storage bucket is configured across all their storage services to ensure their data is not inadvertently exposed to unauthorized applications or users” [36]. This has considerations at both the cloud user and provider level, especially considering that artificial intelligence (AI) applications can be built and run inside the cloud for a variety of reasons. Importantly, these build designs often use approved plug-ins from different vendors, making it all the more complex.
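
As one concrete example of the storage-bucket monitoring described in the quote above, here is a minimal sketch using the AWS boto3 library to flag S3 buckets whose public-access block is missing or incomplete; a real CSPM product runs hundreds of such checks continuously:

    # Minimal CSPM-style check: flag S3 buckets whose public-access block
    # is missing or incomplete. Requires boto3 and AWS credentials.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def bucket_is_locked_down(bucket: str) -> bool:
        """True only if all four public-access-block settings are enabled."""
        try:
            cfg = s3.get_public_access_block(Bucket=bucket)
            return all(cfg["PublicAccessBlockConfiguration"].values())
        except ClientError:
            return False  # no configuration at all: treat as exposed

    for b in s3.list_buckets()["Buckets"]:
        status = "ok" if bucket_is_locked_down(b["Name"]) else "REVIEW: possibly public"
        print(f'{b["Name"]}: {status}')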

8. Artificial Intelligence (AI) Enhanced Authentication Arrives:

Artificial intelligence (AI) is being utilized to strengthen user authentication methods. Behavioral biometrics, such as analyzing typing patterns, mouse movements, and RAM usage, can add an extra layer of security by recognizing the unique behavior of legitimate users. Systems that use AI to analyze user behavior can detect and flag suspicious activity, such as an unauthorized user attempting to access an account or escalate a privilege [37]. Two-factor authentication remains the bare-minimum standard, with many leading identity and access management (IAM) application makers, including Okta, SailPoint, and Google, experimenting with AI for improved analytics and functionality. Both two-factor and multifactor authentication benefit from AI advancements in machine learning via real-time access rights reassignment and improved role groupings [38]. However, multifactor remains stronger at this point because it includes something you are: biometrics. The jury is out on which method will remain the security leader because biometrics can be faked by AI [39]. Importantly, AI tools can remove fake or orphaned accounts much more quickly, reducing risk. However, they likely will not get it right 100% of the time, so some inconvenience remains.
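
To make the typing-pattern idea concrete, below is a minimal sketch that compares a session's typing rhythm to an enrolled baseline; the single feature and the threshold are simplifying assumptions, as production systems use far richer behavioral models:

    # Toy keystroke-dynamics check: compare the inter-key timing of a new
    # session against a user's enrolled baseline using a z-score.
    import statistics

    # Assumed enrollment data: mean delay (ms) between keystrokes per session.
    baseline_sessions = [112, 108, 115, 110, 109, 113, 111]

    def is_suspicious(session_mean_ms: float, threshold: float = 3.0) -> bool:
        mu = statistics.mean(baseline_sessions)
        sigma = statistics.stdev(baseline_sessions)
        z = abs(session_mean_ms - mu) / sigma
        return z > threshold  # far from baseline: require step-up authentication

    print(is_suspicious(111))  # False: matches the user's usual rhythm
    print(is_suspicious(175))  # True: flag and require extra verification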

Conclusion and Recommendations:

Artificial intelligence (AI) remains a leading catalyst for digital transformation in tech automation, identity and access management (IAM), big data analytics, technology orchestration, and collaboration tools. AI-assisted quantum-safe cryptography serves to bolster encryption as old methods are replaced. All of the government actions to incubate ethics in AI are a good start, and the NIST AI Risk Management Framework (AI RMF) 1.0 is long overdue. It will likely be tweaked based on private sector feedback. However, adding the DOD's five principles for the ethical development of AI to the NIST AI RMF could yield better synergies. This approach should be used by the private sector and academia in customized ways. AI product ethical deviations should be thought of as quality control and compliance issues and remediated immediately.

Organizations should consider forming an AI governance committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. ChatGPT is a good encyclopedia and a cool Boolean search tool, yet it got some things wrong about quantum computing in this article, which we cited and corrected. The Simplified AI text-to-graphics generator was cool and useful, but it needed some manual edits as well. Both of these generative AI tools will likely get better with time.

Artificial intelligence (AI) will spur many mobile malware and ransomware variants faster than Apple and Google can block them. This, in conjunction with the fact that people often have no mobile antivirus on their smart phone even if they have it on their personal and work computers, and a culture of happy-go-lucky application downloading, makes it all the worse. As a result, more breaches should be expected via smart phones / watches / eyeglasses from AI-enabled threats.

Therefore, education and awareness around the review and removal of non-essential mobile applications is a top priority, especially for mobile devices used separately or jointly for work purposes. Containerization is required via a mobile device management (MDM) tool such as JAMF, Hexnode, VMware, or Citrix Endpoint Management. A bring your own device (BYOD) policy needs to be written, followed, and updated often, informed by need-to-know and role-based access control (RBAC) principles. This requires a better understanding of geolocation, QR code scanning, couponing, digital signage, in-text ads, micropayments, Bluetooth, geofencing, e-readers, HTML5, etc. Organizations should consider forming a mobile ecosystem security committee to make sure this unique risk is not overlooked or overly merged with traditional web / IT risk. Mapping the mobile ecosystem components in detail, including the AI touch points, is a must.

The growth and acceptability of mass work from home (WFH), combined with the mass resignation / gig economy, remind employers that great pay and culture alone are not enough to keep top talent. At this point AI only takes away some simple jobs while creating AI support jobs, yet the percentages are not clear this early. Signing bonuses and personalized treatment are likely needed to retain top talent. We no longer have the same office, and thus less badge access is needed. Single sign-on (SSO) will likely expand to personal devices (BYOD) and smart phones / watches / eyeglasses. Geolocation-based authentication is here to stay, along with double biometrics, likely fingerprint, eye scan, typing patterns, and facial recognition. The security perimeter remains more defined by data analytics than by physical / digital boundaries, and we should dashboard this with machine learning tools as the use cases evolve.

Cloud infrastructure will continue to grow fast, creating perimeter and compliance complexity / fog. Organizations should preconfigure artificial intelligence (AI) based cloud-scale options and spend more on cloud-trained staff. They should also make sure they are selecting more than two or three cloud providers, all separate from one another. This helps staff get cross-trained on different cloud platforms and plug-in applications. It also mitigates risk and makes vendors bid more competitively. There is huge potential for AI synergies with cloud security posture management (CSPM) tools and threat response tools; experimentation will likely yield future dividends. Organizations should not be passive and stuck in old paradigms. The older generations should seek to learn from the younger generations without bias. Also, comprehensive logging is a must for AI tools.

In regard to cryptocurrency, non-fungible tokens (NFTs), initial coin offerings (ICOs), and related exchanges, artificial intelligence (AI) will be used by crypto scammers and those seeking to launder money. Watch out for scammers who make big claims without details, white papers, filings, or explanations at all. No matter what the investment, find out how it works and ask questions about where your money is going. Honest investment managers and advisors want to share that information and will back it up with details in many documents and filings [40]. Moreover, better blacklisting by crypto exchanges and banks is needed to stop these illicit transactions, erring far on the side of compliance. This requires us to pay more attention to knowing and monitoring our own social media baselines; emerging AI data analytics can help here. If you favor and use crypto mixer and / or splitter services, then you run the risk of having your digital assets mixed with dirty digital assets; you face high fees, zero customer service, no regulatory protection, and no decent Terms of Service and / or Privacy Policy (if any); and you have no guarantee that the service will even work the way you think it will.

As security professionals, we are patriots and defenders of wherever we live and work. We need to know what our social media baseline is across platforms. IT and security professionals need to realize that alleviating disinformation is about security before politics. We should not be afraid to talk about this, because if we are, our organizations will stay weak and outdated, and we will be plied by the same artificial intelligence (AI) generated political bias that we fear confronting. More social media training is needed, as many security professionals still think it is mostly an external marketing thing.

It is best to assume AI tools are reading all social media posts and all other available articles, including this one, which we entered into ChatGPT for feedback. It was slightly helpful in pointing out other considerations. Public-to-private partnerships (InfraGard) need to improve, and application-to-application permissions need to be more scrutinized. Not everyone needs to be a journalist, but everyone can have the common sense to identify AI / malware-inspired fake news. We must report undue AI bias in big tech from an IT, compliance, media, and security perspective. We must also resist the temptation to jump on the AI hype bandwagon and instead evaluate each tool and use case based on real-world business outcomes for the foreseeable future.

About the Authors:

Jeremy Swenson is a disruptive-thinking security entrepreneur, futurist / researcher, and senior management tech risk consultant. He is a frequent speaker, published writer, and podcaster, and he even does some pro bono consulting in these areas. He holds an MBA from St. Mary’s University of MN, an MSST (Master of Science in Security Technologies) degree from the University of Minnesota, and a BA in political science from the University of Wisconsin Eau Claire. He is an alum of the Federal Reserve Secure Payment Task Force; the Crystal, Robbinsdale, and New Hope Citizens Police Academy; and the Minneapolis FBI Citizens Academy.

Matthew Versaggi is a senior leader in artificial intelligence with large-company healthcare experience who has seen hundreds of use cases. He is a distinguished engineer, built an organization’s “College of Artificial Intelligence”, introduced and matured both cognitive AI technology and quantum computing, has been awarded multiple patents, is an experienced public speaker, entrepreneur, strategist, and mentor, and has international business experience. He has an MBA in international business and economics and an MS in artificial intelligence from DePaul University, plus a BS in finance and MIS and a BA in computer science from Alfred University. Lastly, he has nearly a dozen professional certificates split between AI, technology, and business strategy.

References:


[1] Swenson, Jeremy, and NIST; Mashup 12/15/2023; “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”. 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf.

[2] Swenson, Jeremy, and Simplified AI; AI Text to graphics generator. 01/08/24: https://app.simplified.com/

[3] Swenson, Jeremy, and ChatGPT; ChatGPT Logo Mashup. OpenAI. 12/15/23: https://chat.openai.com/auth/login

[4] The White House; “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” 10/30/23: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[5] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

[6] Sun, Zhiyuan; “Two individuals indicted for $25M AI crypto trading scam: DOJ.” Cointelegraph. 12/12/23: https://cointelegraph.com/news/two-individuals-indicted-25m-ai-artificial-intelligence-crypto-trading-scam

[7] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

[8] Nerdynav; “107 Up-to-Date ChatGPT Statistics & User Numbers [Dec 2023].” 12/06/23: https://nerdynav.com/chatgpt-statistics/

[9] The White House; “Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” 10/30/23: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

[10] EU. “EU AI Act: first regulation on artificial intelligence.” 12/19/23: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[11] Jackson, Amber; “Top 10 companies with ethical AI practices.” AI Magazine. 07/12/23: https://aimagazine.com/ai-strategy/top-10-companies-with-ethical-ai-practices

[12] Golbin, Ilana, and Axente, Maria Luciana; “9 ethical AI principles for organizations to follow.” World Economic Forum and PricewaterhouseCoopers (PWC). 06/23/21: https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/

[13] Lopez, Todd C; “DOD Adopts 5 Principles of Artificial Intelligence Ethics”. DOD News. 02/25/20: https://www.defense.gov/News/News-Stories/article/article/2094085/dod-adopts-5-principles-of-artificial-intelligence-ethics/

[14] Golbin, Ilana, and Axente, Maria Luciana; “9 ethical AI principles for organizations to follow.” World Economic Forum and PricewaterhouseCoopers (PWC). 06/23/21: https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/

[15] The Office of the Director of National Intelligence. “Principles of Artificial Intelligence Ethics for the Intelligence Community.” 07/23/20: https://www.dni.gov/index.php/newsroom/press-releases/press-releases-2020/3468-intelligence-community-releases-artificial-intelligence-principles-and-framework#:~:text=The%20Principles%20of%20AI%20Ethics,resilient%20by%20design%2C%20and%20incorporate

[16] NIST; “NIST AI RMF Playbook.” 01/26/23: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook

[17] NIST; “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

[18] NIST; “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 01/26/23: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf

[19] Crafts, Nicholas; “Artificial intelligence as a general-purpose technology: an historical perspective.” Oxford Review of Economic Policy. Volume 37, Issue 3, Autumn 2021: https://academic.oup.com/oxrep/article/37/3/521/6374675

[20] Sun, Zhiyuan; “Two individuals indicted for $25M AI crypto trading scam: DOJ.” Cointelegraph. 12/12/23: https://cointelegraph.com/news/two-individuals-indicted-25m-ai-artificial-intelligence-crypto-trading-scam

[21] Patel, Pranav; “ChatGPT brings forth new opportunities and challenges to the Cybersecurity industry.” LinkedIn Pulse. 04/03/23: https://www.linkedin.com/pulse/chatgpt-brings-forth-new-opportunities-challenges-industry-patel/

[22] FTC; “Preventing the Harms of AI-enabled Voice Cloning.” 11/16/23: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning

[23] Fiata, Gabriele; “Why Evolving AI Threats Need AI-Powered Cybersecurity.” Forbes. 10/04/23: https://www.forbes.com/sites/sap/2023/10/04/why-evolving-ai-threats-need-ai-powered-cybersecurity/?sh=161bd78b72ed

[24] Patel, Pranav; “ChatGPT brings forth new opportunities and challenges to the Cybersecurity industry.” LinkedIn Pulse. 04/03/23: https://www.linkedin.com/pulse/chatgpt-brings-forth-new-opportunities-challenges-industry-patel/

[25] Tobin, Donal; “What Challenges Are Hindering the Success of Your Data Lake Initiative?” Integrate.io. 10/05/22: https://www.integrate.io/blog/data-lake-initiative/

[26] Chuvakin, Anton; “Why Your Security Data Lake Project Will … Well, Actually …” Medium. 10/22/22. https://medium.com/anton-on-security/why-your-security-data-lake-project-will-well-actually-78e0e360c292

[27] Amazon Web Services; “What are the types of quantum technology?” 01/07/23: https://aws.amazon.com/what-is/quantum-computing/ 

[28] ISARA Corporation; “What is Quantum-safe Cryptography?” 2023: https://www.isara.com/resources/what-is-quantum-safe.html

[29] Swenson, Jeremy, and ChatGPT; OpenAI. 12/15/23: https://chat.openai.com/auth/login

[30] Utimaco; “What is Lattice-based Cryptography?” 2023: https://utimaco.com/service/knowledge-base/post-quantum-cryptography/what-lattice-based-cryptography

[31] D. Bernstein, and T. Lange; “Post-quantum cryptography – dealing with the fallout of physics success.” IACR Cryptology. 2017: https://www.semanticscholar.org/paper/Post-quantum-cryptography-dealing-with-the-fallout-Bernstein-Lange/a515aad9132a52b12a46f9a9e7ca2b02951c5b82

[32] Swenson, Jeremy, and ChatGPT; OpenAI. 12/15/23: https://chat.openai.com/auth/login

[33] Sibanda, Isla; “AI and Machine Learning: The Double-Edged Sword in Cybersecurity.” RSA Conference. 12/13/23: https://www.rsaconference.com/library/blog/ai-and-machine-learning-the-double-edged-sword-in-cybersecurity

[34] Michael, Katina, Abbas, Roba, and Roussos, George; “AI in Cybersecurity: The Paradox.” IEEE Transactions on Technology and Society. Vol. 4, no. 2: pg. 104-109. 2023: https://ieeexplore.ieee.org/abstract/document/10153442

[35] Microsoft; “What is CSPM?” 01/07/24: https://www.microsoft.com/en-us/security/business/security-101/what-is-cspm 

[36] Rosencrance, Linda; “How to choose the best cloud security posture management tools.” CSO Online. 10/30/23: https://www.csoonline.com/article/657138/how-to-choose-the-best-cloud-security-posture-management-tools.html

[37] Muneer, Salman Muneer, Muhammad Bux Alvi, and Amina Farrakh; “Cyber Security Event Detection Using Machine Learning Technique.” International Journal of Computational and Innovative Sciences. Vol. 2, no (2): pg. 42-46. 2023: https://ijcis.com/index.php/IJCIS/article/view/65.

[38] Azhar, Ishaq; “Identity Management Capability Powered by Artificial Intelligence to Transform the Way User Access Privileges Are Managed, Monitored and Controlled.” International Journal of Creative Research Thoughts (IJCRT), ISSN:2320-2882, Vol. 9, Issue 1: pg. 4719-4723. January 2021: https://ssrn.com/abstract=3905119

[39] FTC; “Preventing the Harms of AI-enabled Voice Cloning.” 11/16/23: https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning

[40] FTC; “What To Know About Cryptocurrency and Scams.” May 2022: https://consumer.ftc.gov/articles/what-know-about-cryptocurrency-and-scams

Five Unique Tech Trends in 2018 and Implications For 2019

By Jeremy Swenson, MBA, MSST, and Angish Mebratu, MBA.

Every year we like to review and comment on the most impactful technology and business concepts from the prior year, those that are likely to significantly impact the coming year. Although incomplete, these are five areas worth addressing.

5. 5G Expansion Will Spur Business Innovation

Fig. 1. 1G to 5G Growth, Stock, 2018.

2018 was the year 5G moved from hype to reality, and it will become more widespread as the communications supply chain adopts it in 2019. 5G is the next iteration of mobile connectivity, and it aims to be much faster and more reliable than 4G, 3G, etc. Impressively, data speeds with 5G are 10 to 100 times faster than 4G. The benefits include enabling smart IoT-connected cities, seamless 8K video streaming, improved virtual reality styled gaming, self-driving cars that communicate with each other without disruption (thereby enhancing safety and reliability), and improved virtual reality glasses (HoloLens, Google Glass, etc.) providing a new way of looking at the world around us.

As emerging technologies such as artificial intelligence (AI), blockchain, the Internet of Things (IoT), and edge computing (the practice of processing data near the edge of the network where the data is being generated, not in a centralized data-processing repository) take hold everywhere, 5G can offer the advancements necessary to truly take advantage of them. These technologies require 5G's bolstered data transfer speeds, interoperability, and improved reliability. Homes will get smarter, hospitals will be able to provide more intelligent care, and the Internet of Things will go into hyperdrive; the implications of 5G are massive. Yet most importantly, 5G has much less latency, thereby enabling futuristic real-time application experimentation.

“There’s no doubt that much of the recent 5G activity has been focused on investments from service providers and equipment manufacturers,” said Nick Lippis, co-founder and co-chairman of the Open Networking User Group. “However, more IT leaders are starting to make plans for 5G, which includes determining its impact on their data center architecture, procurement strategies and the solutions they’ll roll out” (Kym Gilhooly, BizTech, 11/08/18).

AT&T is one of the leaders in 5G distribution, and as of 12/27/18 it has service up and running in these 12 cities: Atlanta, Charlotte, Dallas, Houston, Indianapolis, Jacksonville, Louisville, Oklahoma City, New Orleans, Raleigh, San Antonio, and Waco (CNN Wire, 12/27/18). Verizon has a similar initiative in an earlier phase in some cities. Google has Google Fiber in some cities, but there is much debate about whether it is better or worse than 5G; time will tell. More data and faster speeds mean more connected devices that need security, data protection, and privacy; failure to protect them aggressively creates too much risk at high cost.

Fig. 2. Likely 5G Use Cases in 2020, Stock, 2018.

4. Browser/Device Fingerprinting Growth Will Spur Better PET (Privacy Enhancing Technologies)

Browser fingerprinting is a method in which websites gather bits of information about your visit, including your time zone, set of installed fonts, language preferences, some plug-in information, etc. (Bill Budington, Bennett Cyphers, Alan Toner, and Jeremy Gillula, Electronic Freedom Foundation, 12/22/18). These data elements are then combined to form a unique fingerprint that identifies your browser or more. The next step is to identify your specific device, and then you as an individual.

Fig. 3. Browser Finger Printing Data, Stock, 2018.

Device fingerprinting overcomes some of the inefficiencies of other means of customer tracking. Most notably, this includes cookies installed in web browsers, which businesses have long used to monitor user behavior when we visit their websites (Bernard Marr, Forbes, 06/23/17). Employers do this at a much more invasive level, but the pay is the tradeoff. Yet when employees use their own mobile device for work-related things, protection of their personal data is best achieved via data containerization tools like AirWatch and Centrify. Even on these devices, the problem is that cookies can be deleted whenever we want. It is relatively easy for us to stop specific sites, services, or companies from using them to track us, depending on how technical we are. “Device fingerprinting doesn’t have this limitation as it doesn’t rely on storing data locally on our machines, instead, it simply monitors data transmitted and received as devices connect with each other” (Bernard Marr, Forbes, 06/23/17).
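
To show how little it takes to build such an identifier, here is a minimal sketch that hashes a handful of browser attributes into a stable fingerprint; the attribute values are illustrative assumptions:

    # Minimal device/browser fingerprint: hash a tuple of attributes that
    # rarely change, yielding a stable identifier with no cookie required.
    import hashlib, json

    attributes = {  # assumed values a site can read from a visiting browser
        "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "timezone": "America/Chicago",
        "language": "en-US",
        "fonts": ["Arial", "Calibri", "Segoe UI"],
        "screen": "1920x1080x24",
    }

    canonical = json.dumps(attributes, sort_keys=True).encode()
    fingerprint = hashlib.sha256(canonical).hexdigest()
    print(fingerprint[:16])  # same attributes -> same ID on every visit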

This type of data exploitation, even with the user’s consent, allows for more complexity and thus higher malware or SPAM/advertising risk. Antivirus makers are challenged to stay ahead of these exploits. The GDPR (General Data Protection Regulation) unequivocally states that this kind of personal data collection and user tracking is not permitted to override the “fundamental rights and freedoms of the data subject, including privacy” and is, we believe, not permitted by the new European regulation (Bill Budington, Bennett Cyphers, Alan Toner, and Jeremy Gillula, Electronic Freedom Foundation, 12/22/18). The high courts will validate this over time.

Further complicating the matter are the terms of service on data-centric technology platforms such as Facebook, Twitter, LinkedIn, WordPress, Instagram, Amazon, etc. Their business models require considerable data sharing with third- and fourth-party business entities, who gather elements of specific user data and then combine them with other browser and device fingerprinting data elements, thus completing the dataset. All the while, the data subject and interconnected entities are mostly clueless. This further complicates compliance and erodes privacy, but it is great for marketers; many people appreciate that Amazon correctly suggests what they often desire. Yet that is not always a good thing, because this starts to precondition a person or a culture to norms at the expense of originality. In the past we saw tobacco companies do this unethically by targeting young people, and there are more examples; think for yourself.

This begs the question of who owns these datasets, at what point in their assembly, where they are stored, how they are protected, and to what extent informed consumers can opt out if practicable, observing that there will be some incidental data collection that has business protection. This paradox spurs competition and the growth of privacy enhancing technologies (PETs). Existing PETs include communication anonymizers, shared bogus online accounts, obfuscation tools, two- or three-factor authentication, VPNs (virtual private networks), I.P. address rotation, enhanced privacy ID (EPID), and digital signature algorithms (encryption) which support anonymity in that each user has a unique public verification key and a unique private signature key. Often these PETs are more useful when used with a fake account or server (honeynet). This attempts to divert and frustrate a potential intruder while giving the defender valuable intelligence.
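
As one concrete PET building block, the sketch below uses the Python cryptography package to generate exactly the kind of key pair described above: a unique private signature key and a unique public verification key. It is a minimal sketch assuming that package is installed:

    # Digital-signature PET building block: each user holds a private
    # signing key; anyone can verify with the public key.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # unique private signature key
    public_key = private_key.public_key()        # unique public verification key

    message = b"payload the user actually authorized"
    signature = private_key.sign(message)

    try:
        public_key.verify(signature, message)    # raises if tampered or forged
        print("signature valid")
    except InvalidSignature:
        print("tampered or forged")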

Fig. 4. VPN Data Flow Diagram, Stock, 2018.

Opera, Tor, and Firefox are leading secure browsers, but there is an opportunity for better security and privacy plugins for the Chrome (Google) browser, while VPN (virtual private network) technologies should be used at the same time for added privacy. These technologies are designed to limit tracking and correlation of users' interactions with third-party entities. Limited-disclosure technology often uses cryptographic techniques that allow users to retrieve only data that is vetted by providers, so that the data transmitted to the third party is trusted and verified.

3. Artificial Intelligence Will Grow on The SMB (Small and Medium Business) and Individual Market

In the past, artificial intelligence (AI) has been primarily the plaything of big tech companies like Amazon, Baidu, Microsoft, Oracle, Google, and some well-funded cybersecurity startups like Cylance. Yet for many other companies and sectors of the economy, these AI systems have been too expensive and too difficult to roll out effectively. Heck, even machine learning and big data analytics systems can be cost and time prohibitive for some sectors of the economy, and certainly for the individual market in prior years. However, we feel the democratizing of cloud-based AI and machine learning tools will make AI tools more accessible to the SMB and individual market.

Fig. 5. Open Source TensorFlow Math AI, Google, 2018.

At present, Amazon dominates cloud AI with its AWS (Amazon Web Services) subsidiary. Google is challenging that with TensorFlow, an open-source AI library that can be used to build other machine-learning software. TensorFlow was the machine learning behind Gmail's suggested smart replies. Recently Google announced its Cloud AutoML, a suite of pre-trained systems that could make AI easier to use (Kyle Wiggers, Venture Beat, 07/28/18). Additionally, “Google announced Contact Center AI, a machine learning-powered customer representative built with Google’s Dialogflow package that interacts with callers over the phone. Contact Center AI, when deployed, fields incoming calls and uses sophisticated natural language processing to suggest solutions to common problems. If the virtual agent can’t solve the caller’s issue, it hands him or her off to a human agent — a feature Google labels “agent assist” — and presents the agent with information relevant to the call at hand” (Kyle Wiggers, Venture Beat, 07/28/18).
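
As a small taste of TensorFlow itself, the minimal sketch below fits a one-variable linear model with the Keras API; it is a generic illustration, not the code behind any Google product named here:

    # Minimal TensorFlow/Keras example: learn y = 2x - 1 from six data points.
    import numpy as np
    import tensorflow as tf

    xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
    ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1,)),
        tf.keras.layers.Dense(1),    # one weight, one bias
    ])
    model.compile(optimizer="sgd", loss="mean_squared_error")
    model.fit(xs, ys, epochs=200, verbose=0)

    print(model.predict(np.array([[10.0]])))  # approximately 19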

The above contact center AI and chatbots can both be applied successfully to personal use cases such as medical triaging, travel assistance, self-harm prevention, translation, training, and improved personal service. Cloud platforms and AI construction tools like the open source TensorFlow will enable SMBs to optimize insurance prices, model designs, diagnose and treat eye conditions, build intelligent contact center personas and chatbots, and much more as technology evolves in 2019.

2. Useful Big Data Will Make or Break Organizational Competitiveness

Developed economies increasingly use big data-intensive technologies for everything from healthcare decisioning to geolocation to power consumption, and soon the world will too. From traffic patterns to music downloads to web service application histories and medical data, it is all stored and analyzed to enable technology and services. Big data use has increased the demand for information management companies such as Oracle, Software AG, IBM, Microsoft, Salesforce, SAP, HP, and Dell-EMC, who themselves have spent billions on software tools and on buying startups to fill their own considerable big data analytics gaps.

Fig. 6. Big Data Venn Diagram, Stock, 2018.

For an organization to be competitive and to ensure its future survival, a “must have big data goal” should be established to handle the complexity of the ever-increasing massive volume of both structured (rows and tables) and unstructured (images and blobs) data. In most enterprise organizations, the volume of data is too big, it moves too fast, or it exceeds current processing capacity. Moreover, the explosive growth of Internet of Things (IoT) devices provides new data, APIs, plugins/tools, and thus complexity and ambiguity.

We know there are open source tools that will likely improve reliability in big data, AI, service, and security contexts in 2019. For example, Apache Hadoop is well-known for its huge-scale data processing capabilities. Its open source big data framework can run on-prem or in the cloud and has very low hardware requirements (Vladimir Fedak, Towards Data Science, 08/29/18). Apache Cassandra is another big data tool, born out of Facebook around 2010. It can process structured data sets distributed across a huge number of nodes across the world. It works well under heavy workloads due to its architecture without single points of failure and boasts unique capabilities no other NoSQL or relational database has. Additionally, it features great linear scalability, simplicity of operations due to its simple query language, constant replication across nodes, and more (Vladimir Fedak, Towards Data Science, 08/29/18).
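
As a brief sketch of what working with Cassandra looks like from application code, the snippet below uses the DataStax Python driver; the keyspace, table, and node address are illustrative assumptions:

    # Minimal Apache Cassandra usage via the DataStax Python driver
    # (pip install cassandra-driver). Assumes a node on localhost.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS analytics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS analytics.events (
            device_id text, ts timestamp, reading double,
            PRIMARY KEY (device_id, ts))
    """)
    session.execute(
        "INSERT INTO analytics.events (device_id, ts, reading) "
        "VALUES (%s, toTimestamp(now()), %s)",
        ("sensor-7", 21.4),
    )
    for row in session.execute("SELECT * FROM analytics.events LIMIT 5"):
        print(row.device_id, row.reading)
    cluster.shutdown()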

For 2019, organizations should consider big data a mainstream quality business practice. They should utilize and research new tools and models to improve their big data use and applications, creating a center of excellence without being married to buzzwords or overly weak certifications that all too often squash disruptive solutioning. Lastly, these centers of excellence need to be dominated not by the traditional IT director overlords. Rather, they should be led by the real people between the cracks who know more and have more creative ideas than these directors, who often build yes-man cliques around themselves and who are often not the most qualified; great ideas and real leaders defy title.

1. Election Disinformation and Weak U.S. Polling Systems Harms Business and Must Be Fixed

The intersection of U.S. politics and media can be at times nasty, petty, selfish, or worse: outright lies and dirty smear campaigns run under shadow proxies who skirt campaign finance laws by posing as a non-political policy advocacy group, or, worse yet, operating as a foreign-sponsored clandestine intelligence agency of an enemy of the nation whose only rule is to disrupt U.S. elections. Perhaps Russian, North Korean, or even Chinese affiliated groups.

Innovations in big data and social media, browser proxies and fiber optic cable, and 5G, in conjunction with the antiquated and insecure U.S. polling system, make election news and security complicated, fragile, and highly important. At present, there are few people and technology companies that can help resolve this dilemma. For a state-sponsored hacker group, altering a U.S. election is the ultimate power play.

Respect for all parties is a must, and disinformation of any type should not be tolerated. Universities, think tanks, startups, government, and large companies need to put time and money into experimenting with how we can reduce disinformation and better secure the polling systems. The first step is public awareness and education on checking purported news sources, especially those from digital media. The second step is more frequent enforcement of slander laws and policies. Lastly, we should hold technology companies to high media ethics standards and should write to their leaders when they violate them.

As for securing the polling systems, multi-factor authentication should be used, and voting should be done digitally via secure encrypted keys. If Amazon can securely track the world's purchases of millions of products, with far more data and complexity, and with service a moon shot better than your local state DMV (driver and motor vehicle) office, then the paper ballot and OCR (optical character recognition) scanners need to go. There are many Android and iOS applications that are more secure, faster, and easier to use than the current U.S. polling system, and they are doing more complex things with more data that changes at an exponentially faster rate. They were also made for less money. Shame on the U.S. OCR election system.

Business should not be afraid to talk about this because, like poisonous malware, it will spread and be used to easily run businesses out of business, often due to greed and/or petty personal differences. Examples include hundreds or thousands of fraudulent negative Yelp reviews, driving a competitor's search rankings down or to a malicious site, redirecting their 1-800 number to a travel scam hotline, spreading false rumors, cyber-squatting, and more. Let 2019 be the year we stand to innovate via disruptive technologies for a more ethical economy.

About the Authors:

Fig. 7. Swenson and Mebratu.

Jeremy Swenson, MBA, MSST, and Angish Mebratu, MBA, met in graduate business school, where they collaborated on global business projects concerning leadership, team dynamics, and strategic innovation. They also worked together at Optum / UHG. Mr. Swenson is a seasoned (14 years) IT consultant, writer, and speaker in business analysis, project management, cyber-security, process improvement, leadership, music, and abstract thinking. Over 15 years, Mr. Mebratu has worked with various Fortune 500 companies including Accenture and Thomson Reuters, and he is currently a principal quality engineer/manager at UnitedHealthcare. He is also an expert in software quality assurance, cybersecurity technologies, and the design and architecture of technology frameworks.