Halloween is in October, and in the spirit of the holiday we’re focusing on the need to protect organizations from legacy vulnerabilities (the “skeletons”), as well as from the “Frankenstein” dangers of biased artificial intelligence. These systemic challenges must be prioritized with careful risk assessments to protect both critical infrastructure and accurate decision-making in a world exploding with technology and data.
In the case of ☠ skeletons, many organizations are dealing with cyber risks in legacy systems, encryption, and data. For example, Claroty’s Team82 researchers discovered a “method to extract private encryption keys from Siemens industrial devices and then compromise whole Siemens product lines.”
And a report from Finnish company WithSecure found that older methods of encryption in Microsoft products create vulnerabilities. The researchers identified that, “The [Office 365 Message Encryption] messages are encrypted in insecure Electronic Codebook (ECB) mode of operation …. [and] since Microsoft has no plans to fix this vulnerability the only mitigation is to avoid using Microsoft Office 365 Message Encryption.”
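To see why ECB mode is considered insecure, here is a minimal Python sketch (using the third-party cryptography package, and not modeled on the actual Office 365 Message Encryption format): under ECB, identical plaintext blocks always encrypt to identical ciphertext blocks, so structure in the message leaks right through the encryption.

```python
# Minimal ECB demonstration; illustrative only.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                # a random 256-bit AES key
block = b"ATTACK AT DAWN!!"         # exactly one 16-byte AES block
plaintext = block * 2               # the same block repeated twice

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# The two ciphertext blocks are identical, so an observer can spot
# repetition in the message without ever knowing the key.
print(ciphertext[:16] == ciphertext[16:32])   # True
```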
Legacy supply chain risks are yet another skeleton in the closet. The article “Supply Chain Attacks Increased Over 600% This Year and Companies Are Falling Behind,” says “most companies believe they are using no open-source software libraries with known vulnerabilities, but new research finds them in 68% of selected enterprise applications.”
In addition, unfixed Log4j vulnerabilities are still present in 35% of systems, and “the Log4Shell vulnerability originated in a Java class called JndiManager that is part of Log4j-core, but which has also been borrowed by 783 other projects and is now found in over 19,000 software components.”
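Part of what makes this skeleton so hard to exorcise is that the vulnerable class can be bundled inside any JAR on disk, not just in an obvious log4j-core dependency. As a rough illustration (not a substitute for a real software composition analysis tool), a sketch like the following walks a directory tree looking for embedded copies of JndiManager.class:

```python
# Rough scan for JARs that bundle the JndiManager class Log4Shell abused.
# Real scanners also check version metadata and recurse into nested JARs.
import sys
import zipfile
from pathlib import Path

def find_jndimanager(root: str):
    """Yield every JAR under `root` that contains JndiManager.class."""
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(name.endswith("JndiManager.class") for name in zf.namelist()):
                    yield jar
        except zipfile.BadZipFile:
            continue  # skip corrupt or non-zip files

if __name__ == "__main__":
    for hit in find_jndimanager(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(hit)
```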
Just as open-source and other vulnerabilities can be ghostly risks invisibly hiding in systems, critical infrastructure faces the challenge of securing older legacy operational technology (OT) that wasn’t built for modern security. The article, “The Steps to Securing Operational Technology in the Government Sector,” discusses how government agencies operate the widest range of OT in the nation, ranging from hydroelectric power systems at the Hoover Dam to medical equipment at VA hospitals.
The author states, “Most ICS and OT networks have antiquated computing systems that weren’t designed to protect critical information. Having traditionally been isolated from IT systems, they may still be running operating systems as old as Windows XP and other software programs that are no longer supported and can’t be patched or upgraded.” The challenge then is how to protect vital systems that are exposed to merging IT and OT and a surge in Internet of Things (IoT) devices that connect to these systems.
Legacy files may also create 💀 ROT (Redundant, Obsolete, Trivial) data that’s “rotting” away in your data stores. When employees leave the company, the data they leave behind may include problems like password files, duplicated data, or overshared sensitive data. Without regular monitoring of your data estate, it’s difficult to implement proper microsegmentation and user access controls.
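As a toy illustration of what a ROT sweep might look for, the sketch below flags files that have not been modified in years or whose names hint at stored credentials; the share path, keywords, and age threshold are all hypothetical examples, not a description of any product:

```python
# Toy ROT sweep: flag stale files and possible credential files.
import time
from pathlib import Path

STALE_SECONDS = 3 * 365 * 24 * 3600            # ~3 years; an assumed threshold
RISKY_KEYWORDS = ("password", "passwd", "credentials")

def sweep(root: str):
    now = time.time()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        reasons = []
        if now - path.stat().st_mtime > STALE_SECONDS:
            reasons.append("stale")
        if any(k in path.name.lower() for k in RISKY_KEYWORDS):
            reasons.append("possible credential file")
        if reasons:
            print(f"{path}: {', '.join(reasons)}")

sweep("/shared/departed-employees")            # hypothetical file share
```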
Zero Trust is the security framework recommended to keep the bogeymen out and your data and systems safe. However, implementing Zero Trust first requires a risk assessment of your assets. A recent U.S. Government Accountability Office blog discussed its report on cybersecurity risks to the U.S. electric grid and recommends a complete risk assessment as the necessary starting point, using a guide like the National Institute of Standards and Technology (NIST) Cybersecurity Framework.
Critical infrastructure operators in today’s world must identify crucial controls and data, and ensure that they are properly segmented so that access is granted only to authorized, trusted individuals with well-protected credentials. With hackers using automation and botnets to attack your fortress like zombies looking for a window left open, it’s imperative that organizations use guides like NIST’s Integrating Cybersecurity and Enterprise Risk Management (ERM) to collaborate internally and develop an accurate risk profile.
The U.S. government is working to get a handle on this problem, and Deputy National Security Advisor Anne Neuberger said the administration is preparing to activate its regulatory authorities at four agencies, starting with transportation, and followed by communications, water, and healthcare.
The reality is that legacy systems aren’t all headed to the technology graveyard just yet. Risk assessments are imperative and must include an inventory of assets, including data, so you know how to prioritize and mitigate risks. If you can’t do away with the legacy systems, then it’s important to understand what they contain so you can better protect the data and user access to critical operational controls.
Dr. Chase Cunningham – who worked for 20 years in Cyber Forensic and Analytic Operations for the NSA, CIA, and FBI, as well as at Forrester Research – points out how humans often repeat mistakes when we don’t honestly examine our strategies and assess risk. His recent LinkedIn article, “Simple Points on Zero Trust and ZTNA Versus the VPN,” says:
“We have known since 1260 B.C. that a perimeter-based model of security was destined to fail. That was when the city of Troy fell … something interesting was seen ‘outside of the wall,’ and the trusted interior operators (the soldiers) went out and quickly inspected that item and brought it past the wall.
Then the malicious internal threats dropped from the belly of the beast and moved laterally and burned the city to the ground, ultimately taking the city … Then we collectively took that failure, digitized it, sped it up, and dispersed it with billions of unknown assets in a threatening battle space (cyberspace) and expected that system to work.”
Dr. Cunningham stresses several basic Zero Trust steps that need to be taken to help make those legacy skeleton threats less scary, including “providing no access unless an asset – an application, data, or service – is expressly permitted for that user.”
However, the need to assign access by user requires that you first understand what needs protecting. This will require an ongoing inventory of devices and data so that user controls can be properly implemented. Zero Trust guides from the Cybersecurity & Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) emphasize the need for asset inventory and risk assessment.
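In code terms, that principle is simply default deny: every request is checked against an explicit allowlist, and anything not on it is refused. The sketch below is a minimal illustration with hypothetical users and assets, not a stand-in for a real policy engine:

```python
# Minimal default-deny access check: no access unless expressly permitted.
from typing import NamedTuple

class Grant(NamedTuple):
    user: str
    asset: str    # an application, data store, or service

# Explicit allowlist; anything absent from it is denied by default.
POLICY: set[Grant] = {
    Grant("analyst-01", "scada-historian"),
    Grant("analyst-01", "incident-dashboard"),
}

def is_allowed(user: str, asset: str) -> bool:
    """Deny unless the (user, asset) pair is expressly permitted."""
    return Grant(user, asset) in POLICY

print(is_allowed("analyst-01", "scada-historian"))   # True
print(is_allowed("analyst-01", "hr-database"))       # False: default deny
```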
Frankenstein AI Bias and Intelligent Automation
To continue the Halloween theme, let’s also take a look at how artificial intelligence is a helpful tool, but also poses some Frankenstein-like risks depending on the type of AI. There has been growing exploration into the uses of AI, and one creative form of AI – Generative AI – has become popular for image creation. In fact, the 🎃 Halloween-themed image at the top of this article was created with a text-to-image prompt using Stability AI’s DreamStudio Lite.
With humans prone to both cybersecurity and data processing errors, AI is also being developed to help improve the management of data and detect errors and cyberattacks. It is important to note the different types of intelligent automation, artificial intelligence, and machine learning. Currently, there are valuable and efficient uses of human-in-the-loop intelligent automation that allow humans to focus on data strategy, oversight, and creative insight while computers do the tedious processing.
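As a simple illustration of the human-in-the-loop pattern, the sketch below lets automation file the routine cases and escalate the uncertain ones to a person; the stand-in model and the confidence threshold are hypothetical:

```python
# Human-in-the-loop routing: automate the confident cases, escalate the rest.
CONFIDENCE_THRESHOLD = 0.90   # assumed cutoff for automatic handling

def classify(document: str) -> tuple[str, float]:
    """Stand-in for a real model; returns (label, confidence)."""
    return ("invoice", 0.72)  # pretend the model is unsure about this one

def route(document: str) -> None:
    label, confidence = classify(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        print(f"auto-filed as {label!r} ({confidence:.0%})")
    else:
        # The human stays in the loop for exactly the cases that need judgment.
        print(f"review queue: model guessed {label!r} at {confidence:.0%}")

route("scanned_page_0001.tiff")   # hypothetical input
```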
With the increase in cyberattacks and shortage of staff, finding ways to free up staff for strategic tasks has never been more important. In the article, “Care and Feeding of the SOC’s Most Powerful Tool: Your Brain,” a cybersecurity pro discusses cognitive overload and how it contributes to decreased security team performance and mistakes.
Intelligent data processing and cybersecurity automation have the potential to save resource time and budget, while also saving humans from cognitive overload. A report from the U.S. Chamber of Commerce stresses the environmental and consumer case for modernizing digital government services, since collecting and processing paper forms costs the federal government over $38.7 billion. Digitization can also lead to more effective data mining, in which organizations analyze large amounts of data to become more efficient or profitable.
However, there are valid concerns about how human bias can negatively impact more advanced artificial intelligence, sometimes called Black Box AI. This risk should be given attention because the decision-making process is not transparent. Several recent articles outline these Frankenstein-gone-wrong concerns.
One of these articles covers the 2022 State of AI Report, an annual publication in which two AI investors share their AI knowledge with the world. The report highlights safety concerns and cites a recent survey of the machine learning (ML) community in which 69% of respondents believe AI safety should be prioritized more than it currently is.
Another report, Deloitte’s State of AI Report 2022, found that “94% of business leaders agree that AI is critical to success over the next five years.” However, it also found that “high-outcome organizations are more likely to have a documented process for governance and quality of data put into AI models, and use an AI quality and risk management process and framework to assess AI model bias…”
AI is also a growing focus in government. Chakib Chraibi, the National Technical Information Service’s Chief Data Scientist at the Commerce Department, recently said “AI has a crucial role for federal agencies, if they are able to implement it effectively. That means creating responsible guardrails like privacy, transparency and fairness in the use of AI.”
The programming problems with Black Box AI stem from the unconscious bias of its creators. The article, “What is Unconscious Bias? How Does it Affect You?” defines bias “as social stereotypes about certain groups of people or demographics that are formed outside of one’s personal conscious awareness.” Since the algorithms that enable machine learning are created by people, their implicit biases can seep into the programming.
Also, the author explains that “if bias were to be completely removed from the outset of model development, bias can still be present in word embedding. As word libraries such as Word2Vec by Google are created with human inputs, their bias can seep into the words and word associations of these libraries.”
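A toy example makes this concrete. The three-dimensional vectors below are invented for illustration (real embeddings such as Word2Vec’s have hundreds of dimensions), but the cosine-similarity geometry that encodes the association works the same way:

```python
# Toy illustration of bias encoded in word-embedding geometry.
import numpy as np

embeddings = {
    "man":      np.array([0.9, 0.1, 0.3]),
    "woman":    np.array([0.1, 0.9, 0.3]),
    "engineer": np.array([0.8, 0.2, 0.5]),  # skewed toward "man" on purpose
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the training text associated engineering with men, the geometry
# inherits that association even though no programmer wrote the rule down.
print(cosine(embeddings["engineer"], embeddings["man"]))    # noticeably higher
print(cosine(embeddings["engineer"], embeddings["woman"]))  # noticeably lower
```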
Below are a few of our recent articles on tackling the challenges of human bias and the benefits of using intelligent automation for risk assessment and data quality control:
- Building a cybersecurity foundation with critical thinking, storytelling, and asset inventory
- Three ways to reduce cybersecurity risk: bias education, collaboration, and intelligent automation
- Cybersecurity awareness benefits from data discovery automation
- The root causes of cybersecurity risk and how automation can help
- Cyber resilience requires managing human risks and leveraging innovation and automation
One of the primary issues with human bias is this: if we still don’t fully understand cognition and how the human brain works, how can we assume a Black Box AI will perform as expected? Are we risking a monstrous outcome like the one in Mary Shelley’s Frankenstein?
Research into human cognition has gone on for eons, trying to learn the best approaches to “thinking” for individual and collective well-being. A recent theory by Robert Epstein illustrates the problems of trying to create a computer that mimics human decision-making. Epstein holds a doctoral degree from Harvard University and is an author, a senior research psychologist at the American Institute for Behavioral Research and Technology, and a former editor-in-chief of Psychology Today magazine.
In Epstein’s article, “Your Brain Is Not a Computer. It Is a Transducer,” he looks at both philosophical and paranormal ideas raised by historic figures like William James (1842-1910), the “Father of American Psychology,” and suggests we abandon metaphors that liken the brain to an information-processing device and consider that it acts more like a transducer (a microphone, for example, is a transducer). He suggests our bodies and brains may work rather like an antenna that focuses and interprets input received from a greater field of creative consciousness.
Epstein states, “The main reason we should give serious thought to such a theory has nothing to do with ghosts. It has to do with the sorry state of brain science and its reliance on the computer metaphor.” Epstein gives the example of Daniel Barenboim, a piano virtuoso and conductor who memorized all thirty-two of Beethoven’s piano sonatas by the time he was 17.
Regarding creative feats like this, he says, “if you study his brain for a hundred years, you will never find a single note, a single musical score, a single instruction for how to move his fingers – not even a ‘representation’ of any of those things. The brain is simply not a storage device. It is an extraordinary entity for sure, but not because it stores or processes information …” Epstein further ponders that neural transduction might explain Carl Jung’s concept of the “collective unconscious” and why, during creative pursuits, the passage of time seems to stop when we tune out our surroundings.
Our articles above outline how biases in human thinking create errors when we rely on past habits and “accepted wisdom” without using critical thinking, collaboration, and feedback to freshly assess goals, problems, priorities, and solutions in the present. Black Box AI, with its far-reaching implications, should not be treated like just another useful technology. It requires a risk assessment of its unusual and unique ingredients, and of its potential impact, to determine its true value and safety. As Victor Frankenstein learned in Mary Shelley’s book – just because you can build something doesn’t mean you should.
Dispel Spooky Risks with Current-State Assessment
The skeletons (legacy data and systems) and Frankensteins (Black Box AI) must receive ongoing assessment and monitoring in the present rather than relying on past risk profiles and assumptions. These risks all create uncertainty within the system: through a lack of built-in, layered controls, through misunderstanding how employees use and interact with technology, or through assuming that programmers and training data are infallible when substantial bias can affect AI algorithms.
Comprehensive, up-to-date inventories and risk assessments are crucial so that governments and organizations make informed risk management decisions that prioritize protections for current assets and technology solutions.
However, many forms of intelligent automation and artificial intelligence perform simpler functions that require less bias scrutiny. Intelligent data solutions often use transparent AI or human-in-the-loop training and oversight to help process data and reduce manual work so people can focus on more high-level, creative tasks.
Intelligent automation can efficiently identify and aggregate data, recognize patterns, or help find outliers and errors. This type of automation is valuable for tedious tasks, data monitoring, automating Security Operations Center tasks, and digitally transforming organizations by ingesting information for data mining. Data analytics and artificial intelligence are also dependent on using quality data inputs, and intelligent automation can help verify data lineage and source, as well as monitor data for changes.
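As one small example of that kind of tedious work, the sketch below flags outliers in a stream of readings with a simple z-score test so a human only has to look at the suspicious values; the data and the cutoff are made up for illustration:

```python
# Flag outliers with a simple z-score test; illustrative data and cutoff.
import statistics

readings = [100.2, 99.8, 100.5, 100.1, 342.7, 99.9, 100.3]  # made-up sensor data

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

for value in readings:
    z = (value - mean) / stdev
    if abs(z) > 2:               # an assumed cutoff for "suspicious"
        print(f"outlier flagged for human review: {value} (z={z:.1f})")
```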
The objective is to scare away the unknowns in the data and technology ecosystem to attain a more accurate understanding of cybersecurity risks and data quality. Access to mission-critical infrastructure controls and data should be clearly restricted. Leadership must investigate, communicate, and collaborate for oversight of Black Box AI. And visibility and management of the data estate should also be simplified to enable real-time indexing, queries, and data monitoring so that analytic patterns can be tested and trusted.
Anacomp’s AI/ML Data Discovery and intelligent document processing solutions automate multiple functions including data ingestion, data inventory, risk assessment and monitoring, digital transformation, and data processing for many use cases in cybersecurity, cloud and data migrations, mergers and acquisitions, intellectual property protection, and analytics projects.
You can see what risks might be lurking in your data estate by testing out data discovery on your own data with a free 1 TB Test Drive of Anacomp’s D3 AI/ML Data Discovery & Distillation Solution.
Anacomp has served the U.S. government, military, and Fortune 500 companies with data visibility, digital transformation, and OCR intelligent document processing projects for over 50 years.