Top 10 Indicators of Compromise (IoC) in OT Environments
For too long, the Operational Technology (OT) world, which spans Industrial Control Systems (ICS), Supervisory Control and Data Acquisition (SCADA), Programmable Logic Controllers (PLCs), and the Industrial Internet of Things (IIoT), operated under the comforting, yet increasingly false, premise of an "air gap." In today's hyper-connected, IT/OT-converged landscape, that air gap is a myth. Threat actors, from financially motivated ransomware gangs to sophisticated nation-state adversaries, have shifted their focus, recognizing that disrupting a manufacturing line or a power grid can yield far greater impact than merely stealing customer data.
The fundamental challenge in OT security is that an attacker’s ultimate goal is rarely data exfiltration; it is disruption, manipulation, or destruction of the physical process. This means the Indicators of Compromise (IoCs) we hunt for in an industrial environment must be different, focusing less on traditional IT artifacts and more on deviations in operational behavior.
This guide, written for security professionals operating at the nexus of IT and OT, breaks down the ten most critical and current IoCs you must be monitoring to stay ahead of the industrial cyber threat.
The Paradigm Shift: Why OT IoCs are Unique
Before diving into the top ten, it’s essential to understand the difference between an IoC in a standard IT network and one in an OT/ICS environment.
An Indicator of Compromise (IoC) is a piece of forensic data, such as a file hash, IP address, or registry key, that points to a security breach. In IT, IoCs often involve:
- Unusual inbound/outbound HTTPS traffic.
- Spam emails or phishing attempts.
- The presence of a known malware file hash.
In OT, however, the target is the cyber-physical system. The security goals prioritize Availability and Integrity over Confidentiality (the CIA Triad is often re-prioritized in OT to AIC). Therefore, OT IoCs focus on deviations from the physical process’s normal state.
A key difference is the concept of Behavioral IoCs versus Atomic IoCs. While atomic IoCs (like a malicious IP address) are still useful, the most potent detection in OT comes from recognizing an attacker’s Tactics, Techniques, and Procedures (TTPs), which manifest as behavioral anomalies in the industrial process itself.
1. Unexpected Protocol Commands or Messages
This is arguably the most critical and specific OT IoC. Industrial networks rely on purpose-built protocols such as Modbus/TCP, EtherNet/IP, DNP3, and OPC UA, many of which were designed for trusted serial links and carry little or no built-in authentication. Attackers who have gained access will use these protocols to perform reconnaissance, modify logic, or directly send commands to PLCs or Remote Terminal Units (RTUs).
What to Look For:
- Unusual Function Codes: A sudden, unauthorized use of a write command (e.g., Modbus function code 6 or 16) when the device typically only performs read operations.
- Out-of-Sequence Commands: A sequence of commands that violates the PLC’s operational logic or the known steps of the industrial process (e.g., opening a valve before the pump is activated).
- Vendor-Specific Protocol Anomalies: Communication packets with unusual lengths, malformed fields, or unexpected values for vendor-proprietary protocols.
- Non-HMI/Engineering Workstation Source: Valid protocol traffic originating from a non-authorized or new IP address, especially a jump box or system outside the control segment, indicating lateral movement.
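The first two bullets can be checked programmatically once traffic is decoded. Below is a minimal sketch in Python, assuming an upstream capture tool has already parsed Modbus/TCP frames into dicts; the field names (`src_ip`, `function_code`) and the allowlist are illustrative, not a specific tool's API.

```python
# Sketch: flag Modbus/TCP write commands from sources that are not
# authorized engineering workstations or HMIs. Frame dicts and field
# names are illustrative; decoding is assumed to happen upstream.

MODBUS_WRITE_CODES = {5, 6, 15, 16}  # write coil/register, multiple coils/registers

def flag_unexpected_writes(frames, authorized_writers):
    """Return frames carrying a write function code from a source
    outside the allowlist of authorized writers."""
    return [f for f in frames
            if f["function_code"] in MODBUS_WRITE_CODES
            and f["src_ip"] not in authorized_writers]

frames = [
    {"src_ip": "10.0.1.5", "dst_ip": "10.0.2.10", "function_code": 3},   # read holding registers
    {"src_ip": "10.0.9.99", "dst_ip": "10.0.2.10", "function_code": 6},  # write single register
]
alerts = flag_unexpected_writes(frames, authorized_writers={"10.0.1.5"})
```

In practice the allowlist would come from the asset inventory discussed later, and the same pattern extends to per-device baselines (a device that only ever answers reads should never see function code 6 or 16 at all).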
2. Unauthorized Configuration or Logic Changes on PLCs/Controllers
The final objective of many OT attacks, particularly those from nation-state actors (e.g., TRITON, Industroyer), is to tamper with the control logic of the PLCs. This is how they cause physical damage or disruption. This change is a high-fidelity IoC.
What to Look For:
- Sudden Configuration Updates: Unscheduled or unauthorized firmware updates, logic program uploads/downloads, or changes to PLC settings.
- Version Mismatch: The version number of the active PLC program suddenly differs from the last verified, golden image in the secure repository.
- Registry/File Changes on HMI/SCADA Servers: Modification of critical system files, registry keys, or the installation of new services on Human-Machine Interface (HMI) or SCADA servers, often used for persistence or data staging.
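The "golden image" comparison above is straightforward to automate. The sketch below assumes you can read the active program back from the controller as bytes (the readback mechanism itself is vendor-specific and out of scope here); the rung text is a placeholder.

```python
import hashlib

def logic_changed(active_program: bytes, golden_sha256: str) -> bool:
    """True if the program read back from the controller no longer
    matches the last verified golden image in the secure repository."""
    return hashlib.sha256(active_program).hexdigest() != golden_sha256

# Illustrative: the golden hash would be stored when the logic was last approved.
golden = hashlib.sha256(b"RUNG 001: XIC Start OTE Motor").hexdigest()
unchanged = logic_changed(b"RUNG 001: XIC Start OTE Motor", golden)
tampered = logic_changed(b"RUNG 001: XIC Start OTE Pump", golden)
```

Any `True` result here that does not line up with an approved change ticket is a high-fidelity alert.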
3. Deviations in Process Variable Baselines
This IoC is about the physical world's response to a cyber intrusion. Successful attackers will manipulate control parameters, such as temperatures, pressures, or motor speeds, to cause a physical effect. A robust OT security system should understand the normal operating parameters.
What to Look For:
- Sensor/Actuator Mismatch: A command is sent to an actuator (e.g., “close valve”), but the associated sensor reports a different, uncommanded state (e.g., “valve remains open”). This could indicate a direct manipulation of the control command or sensor data.
- Out-of-Band Process Values: Sustained or sudden spikes/drops in critical process variables (flow rate, RPM, pressure) that exceed pre-defined, acceptable operational limits without a corresponding manual operator action or scheduled event.
- Aggressive PID Tuning Changes: Rapid, uncharacteristic adjustments to Proportional-Integral-Derivative (PID) loop parameters, which attackers use to destabilize a process.
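The first two checks, commanded-versus-reported state and out-of-band values, reduce to simple comparisons once you have a snapshot of the process. A minimal sketch, with illustrative field names and limits:

```python
def check_process_sample(sample, limits):
    """Return alert strings for one process snapshot: commanded vs.
    reported actuator state, and variables outside operational limits.
    Field names and units are illustrative."""
    alerts = []
    # Sensor/actuator mismatch: valve commanded closed but not reporting closed.
    if sample["valve_cmd"] == "close" and sample["valve_state"] != "closed":
        alerts.append("mismatch: valve commanded closed but reports "
                      + sample["valve_state"])
    # Out-of-band process value against pre-defined operational limits.
    lo, hi = limits["pressure_bar"]
    if not lo <= sample["pressure_bar"] <= hi:
        alerts.append(f"pressure {sample['pressure_bar']} bar outside {lo}-{hi} bar")
    return alerts

sample = {"valve_cmd": "close", "valve_state": "open", "pressure_bar": 9.2}
alerts = check_process_sample(sample, {"pressure_bar": (2.0, 8.0)})
```

Real deployments would suppress alerts during logged operator actions and scheduled events, as the bullet above notes; the comparison logic itself stays this simple.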
4. Unusual Outbound Network Activity (C2 Communication)
While the objective isn’t always data theft, attackers still need Command and Control (C2) communication. Because OT networks often have strict communication boundaries, any unexpected outbound connection is a massive red flag.
What to Look For:
- Connections to External IP Addresses: An ICS device (like an Engineering Workstation or historian server) attempting to connect to an external, unknown, or known-malicious IP address, often over a non-standard port or a legitimate-looking port like 443 (HTTPS) to blend in.
- DNS Request Anomalies: Excessive or highly unusual DNS queries, especially those pointing to newly registered domains or domains associated with known malware families (DNS tunneling is a common exfiltration tactic).
- Lateral Movement Traffic: Discovery scans or connection attempts (e.g., RDP, SMB) between OT devices that have no legitimate operational need to communicate.
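Because OT segments should only talk to a small, known set of ranges, the external-destination check is essentially an allowlist lookup. A sketch using the standard-library `ipaddress` module; the ranges and flow fields are illustrative:

```python
from ipaddress import ip_address, ip_network

# Illustrative control-segment ranges; in practice, from the network design.
OT_RANGES = [ip_network("10.0.0.0/16"), ip_network("192.168.10.0/24")]

def external_destinations(flows):
    """Flows from OT hosts to destinations outside the defined OT ranges.
    On a well-segmented network, this list should be empty."""
    return [f for f in flows
            if not any(ip_address(f["dst_ip"]) in net for net in OT_RANGES)]

flows = [
    {"src_ip": "10.0.2.10", "dst_ip": "10.0.1.5", "dst_port": 502},      # internal Modbus
    {"src_ip": "10.0.2.10", "dst_ip": "203.0.113.7", "dst_port": 443},   # unexpected external
]
alerts = external_destinations(flows)
```

Note the flagged flow uses port 443: as the bullet says, C2 traffic often rides legitimate-looking ports, so the destination, not the port, is the signal.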
5. Anomalous User and Privileged Account Activity
Compromised credentials remain a leading vector for lateral movement across the IT/OT boundary. Threat actors will leverage stolen IT credentials to jump onto the OT network.
What to Look For:
- Login Geolocation and Time Irregularities: A technician’s account logging in from a foreign country or at 3 AM when they are known to work day shifts and locally.
- Multiple Failed Login Attempts: A sudden, high volume of failed logins to critical systems (HMI, historian database, Active Directory within OT) indicating a brute-force or password-spraying attack.
- Excessive Privilege Escalation: An account suddenly requesting or being granted highly privileged access (e.g., adding a user to the PLC programmer group) that is outside of their normal job function.
- Simultaneous Logins: A single user account logging in from two different, disparate locations or systems simultaneously.
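Two of these checks, off-hours logins and simultaneous logins, can be sketched directly over parsed authentication events. Field names and the shift window are illustrative assumptions:

```python
from collections import defaultdict

def off_hours_logins(events, shift=(6, 18)):
    """Logins outside the account's normal shift window (24h hours)."""
    start, end = shift
    return [e for e in events if not start <= e["hour"] < end]

def simultaneous_logins(events):
    """User accounts seen on more than one host in the same hour."""
    hosts = defaultdict(set)
    for e in events:
        hosts[(e["user"], e["hour"])].add(e["host"])
    return sorted({user for (user, _), h in hosts.items() if len(h) > 1})

events = [
    {"user": "tech01", "host": "hmi-01", "hour": 9},
    {"user": "tech01", "host": "ews-02", "hour": 9},   # same hour, second host
    {"user": "tech02", "host": "hmi-01", "hour": 3},   # 3 AM login
]
off_hours = off_hours_logins(events)
duplicated = simultaneous_logins(events)
```

A production version would learn per-user baselines rather than a fixed shift window, and add the geolocation dimension, but the grouping logic is the same.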
6. Unexpected File and Data Migration
Although OT attacks focus on disruption, the initial stages often involve staging data, dropping malware, or gathering intelligence, all of which involve file activity. Ransomware actors, in particular, will stage files before encryption.
What to Look For:
- Unusual File Creation/Access: The appearance of unknown executable files (.exe, .dll) in unusual directories (like a temporary folder on an HMI) or suspicious batch scripts (.bat, .ps1).
- Data Staging: A large, sudden increase in file write volume on historian or data aggregation servers, especially if data is being compressed (.zip, .rar) or moved to an unusual outbound staging directory.
- Malicious File Hashes: Detection of file hashes (e.g., SHA-256) known to be associated with common ICS-targeting malware families (e.g., Industroyer, TRITON, or specialized ransomware variants).
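Hash matching is the simplest of these checks to implement. In the sketch below the known-bad set contains a placeholder hash; in practice it would be populated from a threat-intelligence feed, and file contents would be read from disk rather than passed in:

```python
import hashlib

# Placeholder known-bad set; real deployments load this from a threat-intel feed.
KNOWN_BAD = {hashlib.sha256(b"illustrative dropper payload").hexdigest()}

def match_known_bad(files):
    """Return paths whose contents hash (SHA-256) to a known-bad value.
    `files` maps path -> raw bytes; illustrative structure."""
    return [path for path, data in files.items()
            if hashlib.sha256(data).hexdigest() in KNOWN_BAD]

files = {
    r"C:\Temp\updater.exe": b"illustrative dropper payload",   # suspicious temp-dir executable
    r"C:\HMI\runtime.dll": b"legitimate runtime bytes",
}
hits = match_known_bad(files)
```

Hash lookups are atomic IoCs and easy to evade with recompilation, which is why the behavioral checks in the other sections carry more weight; they are still cheap enough to run everywhere.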
7. Time Synchronization Irregularities (NTP/PTP)
Attackers often tamper with system clocks or network time protocol (NTP) settings on controllers or servers to evade detection, manipulate time-based automation logic, or confuse forensic analysis efforts.
What to Look For:
- Drift in Controller Clocks: A sudden, significant difference in the recorded time between an ICS controller (PLC/RTU) and the central time server (NTP/PTP).
- NTP Server Changes: Unauthorized changes to the designated NTP source for critical OT devices.
- Log Time Inconsistencies: Log entries across different systems that show inconsistent timestamps for related events, suggesting manual clock manipulation.
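Clock-drift detection is a threshold comparison against the reference time source. A minimal sketch, assuming device clocks have already been sampled as epoch seconds (device names and the skew tolerance are illustrative):

```python
def clock_drift(device_times, reference, max_skew_s=5.0):
    """Devices whose reported clock differs from the reference time
    source by more than max_skew_s seconds. Times are epoch seconds."""
    return {dev: round(t - reference, 1)
            for dev, t in device_times.items()
            if abs(t - reference) > max_skew_s}

reference = 1_700_000_000.0  # time reported by the NTP/PTP source
drift = clock_drift({"plc-01": reference + 1.2,       # within tolerance
                     "rtu-07": reference - 3600.0},   # one hour behind
                    reference)
```

A sudden hour-scale offset on a single controller, as in `rtu-07` here, is exactly the kind of jump that suggests manipulation rather than ordinary oscillator drift.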
8. Tampering with Security and Monitoring Tools
A successful attack often requires blinding the operator or the security team. Attackers will target endpoint detection and response (EDR) agents, firewalls, and logging systems.
What to Look For:
- Disabling Security Agents: Log entries indicating that an antivirus, EDR, or host-based firewall service has been suddenly disabled, stopped, or had its configuration modified.
- Log Deletion or Anomalous Volume: Mass deletion of system, security, or application logs, or alternatively, a sudden massive influx of garbage log entries intended to mask critical events.
- Firewall Rule Changes: Unauthorized modification or addition of firewall rules on the OT network boundary or internal segmentation firewalls, especially rules that permit new outbound connections.
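Detecting the first bullet amounts to watching parsed host logs for stop or disable actions against a watched set of security services. The service names, log fields, and action values below are illustrative, not tied to any specific EDR product:

```python
# Illustrative names of the security services we expect to stay running.
WATCHED_SERVICES = {"edr_agent", "host_firewall", "av_engine"}

def tamper_events(entries):
    """Parsed log entries showing stop/disable/reconfigure actions
    against monitored security services."""
    return [e for e in entries
            if e["service"] in WATCHED_SERVICES
            and e["action"] in {"stopped", "disabled", "config_changed"}]

entries = [
    {"host": "hmi-01", "service": "edr_agent", "action": "stopped"},
    {"host": "hmi-01", "service": "print_spooler", "action": "stopped"},  # benign
]
alerts = tamper_events(entries)
```

Crucially, this check must run on a system the attacker cannot silence, such as a central log collector, since the whole point of the tampering is to blind host-local monitoring.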
9. Unauthorized Network Scans and Discovery Attempts
Before an attack, adversaries spend time mapping the network, identifying control devices, and checking for vulnerabilities. This reconnaissance phase leaves distinct network IoCs.
What to Look For:
- Excessive ARP or Broadcast Traffic: A sudden surge in broadcast or Address Resolution Protocol (ARP) requests from a single device, indicating a host performing network discovery (often an attacker preparing for lateral movement).
- Protocol Port Scanning: Attempts to connect to a wide range of ports on multiple devices in a sequential or random manner. This is often an attacker looking for open services like RDP or SSH, or industrial protocol ports like 502 (Modbus/TCP).
- New Listening Ports: The appearance of a new or unusual listening port on a critical server or controller, which may be a backdoor or a C2 communication channel.
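Scan detection can be sketched as counting distinct targets per source within a time window: legitimate OT hosts talk to a small, stable set of peers, while a scanner fans out. Flow fields and the threshold are illustrative:

```python
from collections import defaultdict

def likely_scanners(flows, target_threshold=20):
    """Sources contacting an unusually wide set of (host, port) targets
    within the observed window, a classic discovery pattern."""
    targets = defaultdict(set)
    for f in flows:
        targets[f["src_ip"]].add((f["dst_ip"], f["dst_port"]))
    return [src for src, t in targets.items() if len(t) >= target_threshold]

# One host probing port 502 across 25 addresses, plus one normal flow.
flows = [{"src_ip": "10.0.3.50", "dst_ip": f"10.0.2.{i}", "dst_port": 502}
         for i in range(1, 26)]
flows.append({"src_ip": "10.0.1.5", "dst_ip": "10.0.2.10", "dst_port": 502})
scanners = likely_scanners(flows)
```

Because OT communication patterns are so static, the threshold can be far lower than in IT networks; even a dozen new peers from one source in a few minutes is unusual.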
10. System Restarts and Unscheduled Downtime
The ultimate, and most obvious, IoC is an unexpected physical impact. Attackers may force a system crash or shutdown to mask their activity, delay response, or simply achieve a denial-of-service (DoS) effect.
What to Look For:
- Unscheduled PLC Stop State: A critical controller unexpectedly switching from a RUN state to a STOP or PROGRAM state. This is an extremely high-priority alert.
- Process Interruption Without Operator Action: The industrial process unexpectedly shutting down or entering an emergency state without a corresponding manual action logged by an operator.
- High CPU/Resource Utilization: Abnormal and sustained spikes in CPU or memory usage on HMI or SCADA servers, which could indicate resource-intensive malware, a DoS attack, or an exfiltration process.
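The unscheduled-stop check reduces to watching for RUN-to-STOP/PROGRAM transitions with no corresponding operator action. A sketch over time-ordered state samples; the field names and the operator-action flag are illustrative:

```python
def run_stop_alerts(samples):
    """Transitions out of RUN into STOP or PROGRAM without a logged
    operator action. `samples` must be ordered by time."""
    alerts, prev = [], {}
    for s in samples:
        if (prev.get(s["plc"]) == "RUN"
                and s["state"] in {"STOP", "PROGRAM"}
                and not s.get("operator_action")):
            alerts.append(s)
        prev[s["plc"]] = s["state"]
    return alerts

samples = [
    {"plc": "plc-01", "state": "RUN"},
    {"plc": "plc-01", "state": "STOP"},                              # unexplained stop
    {"plc": "plc-02", "state": "RUN"},
    {"plc": "plc-02", "state": "PROGRAM", "operator_action": "wo-4411"},  # authorized
]
alerts = run_stop_alerts(samples)
```

As the bullet says, this should be treated as an extremely high-priority alert; the suppression on a logged work order only exists to keep scheduled maintenance from drowning out real events.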
The Proactive Stance: From IoC to IoA
While IoCs are essential (they confirm a compromise has already occurred), the future of industrial cybersecurity relies on moving toward Indicators of Attack (IoAs).
IoAs focus on the attacker’s TTPs (Tactics, Techniques, and Procedures). By detecting the behavior (e.g., an unauthorized account attempting to modify a PLC), you can stop the attack before the damage (the IoC) is registered.
Strategic Detection and Defense:
- Network Segmentation: The single most effective countermeasure. Isolate your critical OT network zones (Levels 0-2) from the IT network (Levels 3-5). This limits lateral movement and makes IoCs easier to spot.
- Deep Packet Inspection (DPI): Utilize OT-aware network monitoring tools that can perform DPI on industrial protocols. These tools can parse a Modbus packet and alert on an anomalous function code (IoC #1) rather than just flagging general traffic.
- Asset Inventory and Baseline: You cannot detect an anomaly if you don’t know what “normal” looks like. Maintain a complete, accurate, and continuously updated inventory of all OT devices, their firmware, open ports, and established communication baselines. This is the foundation for detecting IoCs #2, #3, and #9.
- Principle of Least Privilege: Enforce strict access control. Use Multi-Factor Authentication (MFA) for all remote access and privileged accounts (IoC #5). Restrict engineering workstations to only communicate with the PLCs they manage.
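The asset inventory and baseline point is the one that makes most of the earlier detections possible, and the comparison itself is simple set arithmetic. A sketch with illustrative device attributes (firmware version, open ports):

```python
def inventory_drift(baseline, observed):
    """Compare an observed asset snapshot against the approved baseline:
    new devices, missing devices, and devices whose attributes changed.
    Attribute dicts (firmware, ports) are illustrative."""
    new = sorted(set(observed) - set(baseline))
    missing = sorted(set(baseline) - set(observed))
    changed = sorted(d for d in set(baseline) & set(observed)
                     if baseline[d] != observed[d])
    return new, missing, changed

baseline = {"plc-01": {"fw": "2.1", "ports": [502]},
            "hmi-01": {"fw": "9.4", "ports": [443]}}
observed = {"plc-01": {"fw": "2.2", "ports": [502]},            # firmware changed
            "hmi-01": {"fw": "9.4", "ports": [443]},
            "unknown-device": {"fw": "?", "ports": [23]}}       # new asset
new, missing, changed = inventory_drift(baseline, observed)
```

Each bucket maps back to an IoC above: a new device feeds the discovery checks (#9), a firmware change feeds the configuration checks (#2), and a new open port feeds the listening-port check (#9).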
The industrial world is under siege. By shifting our focus to these specific, high-value Indicators of Compromise, especially those rooted in the operational reality of ICS protocols and physics, we transform our industrial defenses from a reactive effort into a proactive, process-aware security posture. The safety and reliability of our critical infrastructure depend on it.
