CTI Fundamentals: What the Field Actually Requires

Most organizations have a threat feed subscription. Almost none have a threat intelligence program.

April 14, 2026

Cyber threat intelligence is one of those fields where the terminology has been industrialized faster than the practice. Every vendor sells "threat intelligence." Every SOC claims to do it. What most of them actually have is a list of bad IPs that gets auto-blocked at the firewall and a dashboard that turns green when the list updates. That is not intelligence. That is a subscription service dressed up in intelligence language.

The distinction matters because what you call the thing determines how you build it. If CTI is a feed, you buy a feed and you're done. If CTI is a discipline, meaning a structured process for turning raw data into decisions, then the feed is just collection, and you have five more phases of work ahead of you. Most organizations have stopped at collection and declared victory. This post is about everything they skipped.

The four levels, and why three of them get ignored

CTI splits into four levels based on audience and time horizon. Strategic is non-technical intelligence for executives and decision-makers: cyber trends, threat actor motives, geopolitical context, business risk. Tactical focuses on adversary TTPs (tactics, techniques, and procedures), typically mapped to MITRE ATT&CK and consumed by security architects and blue teams who need to understand how adversaries operate. Operational covers specific ongoing campaigns, the who, what, and when of an active threat, consumed by SOC and IR teams during a hunt or incident. Technical is the most granular: raw IOCs such as malicious IPs, domains, file hashes, and URLs, ingested directly into SIEMs, firewalls, SOAR platforms, and EDRs.

In practice, most organizations only ever operate at the technical level. They ingest IOC feeds, block the indicators, and call it done. This is backwards. Technical intelligence has the shortest shelf life of the four. IOCs age out in days to weeks. An IP that hosted a C2 server last month may be hosting legitimate infrastructure today. Blocking stale IOCs generates noise. Chasing fresh ones without context generates more noise. The levels above technical exist precisely because indicators without adversary context are just trivia.

Strategic intelligence is where most programs have zero presence at all. Leadership is making risk decisions on budget allocation, security architecture, and vendor selection, with no informed picture of who is actually targeting the organization. That is not a technology problem. It is a process failure.

MITRE ATT&CK: what it is and what it is not

ATT&CK is a knowledge base of adversary tactics and techniques derived from real-world intrusion observations. It is not a checklist. It is not a compliance framework. It is a shared vocabulary for describing how attacks happen at the behavioral level, which makes it useful for detection engineering, threat hunting, red team planning, and CTI analysis, as long as you understand what the terms mean.

Tactics are the why. A tactic is an adversary's objective at a given stage of an intrusion: gain initial access, establish persistence, escalate privileges, move laterally, exfiltrate data. There are fourteen of them in the Enterprise matrix. They are the columns. Techniques are the how. A technique is the specific method an adversary uses to accomplish a tactic. One tactic has multiple techniques. Credential Access, for example, has techniques like OS Credential Dumping, Brute Force, and Steal Web Session Cookie. Sub-techniques go one level deeper: OS Credential Dumping has sub-techniques including LSASS Memory, SAM, DCSync, and others, each with its own detection logic and adversary prevalence data.

The value is in the hierarchy. An alert that tells you "suspicious process injection" is less useful than one that maps to T1055.002 (Process Injection: Portable Executable Injection), because the specific technique tells you what the adversary is trying to accomplish, which actors use it, and what they typically do next. A detection without an ATT&CK mapping is a detection you can't reason about at the intelligence level. A mature detection engineering program knows exactly which techniques have coverage and which are blind spots. Most programs don't know either.
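To make the hierarchy concrete, here is a minimal sketch of how an alert might carry an ATT&CK mapping and resolve it to tactic and parent-technique context. The lookup table is a hand-copied illustrative slice, not a complete dataset; a real program would load the official STIX bundles MITRE publishes for ATT&CK.

```python
from dataclasses import dataclass

# Illustrative slice of the ATT&CK hierarchy (hand-copied, not complete).
# T1055 also maps to Privilege Escalation; one tactic is shown for brevity.
TECHNIQUES = {
    "T1003": ("Credential Access", "OS Credential Dumping"),
    "T1003.001": ("Credential Access", "OS Credential Dumping: LSASS Memory"),
    "T1055": ("Defense Evasion", "Process Injection"),
    "T1055.002": ("Defense Evasion", "Process Injection: Portable Executable Injection"),
}

@dataclass
class Alert:
    title: str
    technique_id: str  # ATT&CK technique or sub-technique ID

    def context(self) -> str:
        """Resolve the mapped technique and its parent for analyst context."""
        tactic, name = TECHNIQUES[self.technique_id]
        parent_id = self.technique_id.split(".")[0]
        parent = TECHNIQUES[parent_id][1]
        return f"{self.technique_id} ({name}) — tactic: {tactic}, parent: {parent}"

alert = Alert("Suspicious PE injection into lsass.exe", "T1055.002")
print(alert.context())
```

The point of the structure is that the sub-technique ID, not the alert title, is what lets downstream systems reason about coverage and adversary prevalence.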

IOCs versus IOAs: the difference between looking backward and looking forward

An Indicator of Compromise is a forensic artifact that tells you a system has already been compromised. These include malicious file hashes, known C2 IP addresses, registry keys dropped by malware, and suspicious file paths. They are reactive and evidence-based. You find them during or after an incident. They answer the question: was this system hit?

An Indicator of Attack focuses on behavior and intent in real time, independent of the specific tools or malware used. IOAs are proactive. They look for what an attacker is doing, not what they left behind. A process spawning an unexpected child process, like Word launching PowerShell. Credential dumping behavior. Lateral movement patterns. Unusual network reconnaissance. These are IOAs.

The practical implication is significant. An attacker who uses a novel malware sample, a legitimate admin tool, or a living-off-the-land technique will evade every IOC-based detection you have. The hash doesn't match anything in your threat feed. The IP isn't on any blocklist. Your SIEM doesn't fire. IOA-based detection catches the behavior regardless of the tool used to produce it. This is why mature EDR platforms have shifted toward behavioral detection, and why TTP-based detection built from ATT&CK mappings is more durable than indicator-based blocking. IOCs expire. Adversary behavior patterns persist for years.
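A minimal sketch of what IOA-style detection looks like in code: the check keys on behavior, an unexpected parent-child process pair, rather than any hash or IP, so a novel tool triggers it just as readily as known malware. The pairs below are illustrative assumptions, not a vetted ruleset.

```python
# Parent processes that should rarely, if ever, spawn these children.
# Illustrative pairs only — a production ruleset would be far larger
# and tuned against the organization's own baseline.
SUSPICIOUS_CHILDREN = {
    "winword.exe": {"powershell.exe", "cmd.exe", "wscript.exe"},
    "excel.exe": {"powershell.exe", "cmd.exe"},
    "outlook.exe": {"powershell.exe"},
}

def is_suspicious_spawn(parent: str, child: str) -> bool:
    """Behavioral check: is this parent→child pair anomalous?"""
    return child.lower() in SUSPICIOUS_CHILDREN.get(parent.lower(), set())

print(is_suspicious_spawn("WINWORD.EXE", "powershell.exe"))   # True
print(is_suspicious_spawn("explorer.exe", "powershell.exe"))  # False
```

Notice that nothing in the check references an indicator feed: swap the implant, recompile the payload, route through fresh infrastructure, and the behavior still fires.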

APT groups: what separates them from commodity criminals

APT stands for Advanced Persistent Threat. The name is accurate on all three counts, which is unusual for security terminology. APT28, also known as Fancy Bear, is Russian military intelligence, specifically GRU Unit 26165. APT41, also called Double Dragon, operates under the Chinese Ministry of State Security. These are not criminal organizations running ransomware for profit. They are intelligence services conducting cyber operations in service of national objectives: espionage, sabotage, intellectual property theft, geopolitical positioning.

The key distinctions from commodity cybercriminals come down to motivation, dwell time, and capability. A commodity criminal wants financial gain and is in and out in days to weeks. An APT group may maintain persistent access to a target network for months to years without triggering detection. They have nation-state funding, large teams, custom malware, and proprietary implants that don't show up in commodity threat feeds. They are selective in targeting. They do not carpet-bomb industries hoping for easy victims.

This matters for CTI because the response to each is fundamentally different. Against commodity actors, blocking known IOCs and patching CVEs with public exploits provides meaningful protection. Against a nation-state actor with a zero-day budget and a dedicated team researching your specific infrastructure, IOC-based defenses are nearly irrelevant. You need TTP-based detection, deception technology, and threat hunting programs that assume breach. Knowing which category of adversary is actually interested in your organization, something that requires strategic and operational CTI, determines which defensive investments actually reduce risk.

STIX and TAXII: why the industry needed a common language

Before STIX and TAXII, CTI sharing was a mess. Intelligence moved as PDFs, emails, spreadsheets, and ad hoc blog posts. You couldn't automate ingestion. You couldn't correlate across sources. You couldn't feed it into tools without manual transformation at every step. The industry had a lot of people sharing a lot of things that nobody could operationalize at scale.

STIX, Structured Threat Information eXpression, is the language. It provides a standardized schema for expressing threat intelligence as structured objects in JSON. A STIX bundle can contain threat actors, malware, attack patterns, indicators, campaigns, and the relationships between all of them, each as a typed object with defined fields and a unique ID. When a CTI platform exports a campaign report in STIX 2.1, every consuming platform knows exactly how to parse it.
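Because STIX 2.1 objects are plain JSON, a minimal bundle can be sketched by hand with nothing but the standard library (production code would typically use a library such as the `stix2` Python package instead). The indicator pattern, malware name, and domain below are hypothetical placeholders.

```python
import json
import uuid
from datetime import datetime, timezone

def stix_id(obj_type: str) -> str:
    """STIX 2.1 IDs take the form '<type>--<UUID>'."""
    return f"{obj_type}--{uuid.uuid4()}"

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator", "spec_version": "2.1", "id": stix_id("indicator"),
    "created": now, "modified": now,
    "name": "C2 beacon domain",
    "pattern": "[domain-name:value = 'evil.example.com']",  # placeholder value
    "pattern_type": "stix",
    "valid_from": now,
}

malware = {
    "type": "malware", "spec_version": "2.1", "id": stix_id("malware"),
    "created": now, "modified": now,
    "name": "ExampleRAT",  # hypothetical family name
    "is_family": True,
}

# The relationship is itself a typed object: "this indicator indicates that malware."
relationship = {
    "type": "relationship", "spec_version": "2.1", "id": stix_id("relationship"),
    "created": now, "modified": now,
    "relationship_type": "indicates",
    "source_ref": indicator["id"],
    "target_ref": malware["id"],
}

bundle = {"type": "bundle", "id": stix_id("bundle"),
          "objects": [indicator, malware, relationship]}
print(json.dumps(bundle, indent=2)[:300])
```

The typed relationship object is the part worth noticing: the connection between indicator and malware is data, not prose, which is what makes cross-platform correlation automatable.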

TAXII, Trusted Automated eXchange of Intelligence Information, is the transport protocol. It defines how STIX content is published, discovered, and consumed over HTTPS. STIX is the vocabulary; TAXII is the postal system. TAXII 2.1 organizes sharing around Collections: repositories of STIX objects that a server makes available, similar to a database feed, which clients query and filter through a REST API. (Earlier versions of the specification also sketched Channels, a publish/subscribe model for pushing intelligence to subscribers, but Channels were never fully defined and were dropped from TAXII 2.1.) The result is fully automated machine-to-machine CTI exchange without human intermediaries.

The practical implication: if your CTI platform supports TAXII and your SIEM or SOAR supports TAXII, you can have TTP-based intelligence flowing into your detection stack automatically without manual export, transformation, or copy-paste. Most organizations are not doing this. They are still emailing PDFs.

MISP and OpenCTI: complementary, not competing

Both are open-source CTI platforms that mature programs run together. They are not the same thing and they are not interchangeable.

MISP, developed and maintained by CIRCL and widely adopted by CERTs and ISACs, is event-centric and IOC-sharing-first. Its data model organizes intelligence into discrete events containing indicator attributes, with a powerful correlation engine and federation capability that allows MISP instances to sync across organizations and sharing communities at speed and volume. It is the industry standard for community-level IOC sharing. If you want to participate in an ISAC or exchange indicators with peer organizations, MISP is the expected platform.

OpenCTI, built by Filigran in collaboration with ANSSI, is knowledge-graph-centric and analysis-first. Its native STIX 2.1 data model treats every entity as a graph node, including threat actors, malware, campaigns, and TTPs, and every relationship as a first-class object. This makes it optimized for analyst workflows, ATT&CK mapping, and understanding the why and who behind threats rather than just the what. The key difference is architectural: MISP wraps indicators in event containers, while OpenCTI connects entities through a relational graph where the connections themselves carry analytical meaning.

The practical answer is to run both. MISP handles collection and community sharing. OpenCTI handles enrichment and analysis. Feed MISP into OpenCTI and you get the volume of a sharing community with the analytical depth of a knowledge graph.

The intelligence lifecycle: six phases, two of which almost nobody does

The CTI intelligence lifecycle has six phases: Planning and Direction, Collection, Processing, Analysis, Dissemination, and Feedback. It is iterative and continuous. The output of each cycle informs the next. Here is where organizations actually are versus where they think they are.

Planning and Direction is where intelligence requirements get defined. Priority Intelligence Requirements, or PIRs, answer the question: what does this organization need to know, and why? Without defined PIRs, collection is undirected. You collect everything. You understand nothing. This phase gets skipped constantly because it requires talking to stakeholders and making prioritization decisions, which is harder than just subscribing to a feed.

Collection is the gathering of raw data: threat feeds, OSINT, dark web monitoring, ISAC sharing, internal telemetry, human intelligence. This is the phase most organizations actually do.

Processing is normalization and structuring: parsing, deduplication, enrichment, translation into STIX or platform-native schemas. Most platforms automate large portions of this.

Analysis is where raw data becomes actual intelligence. Analysts apply critical thinking, historical context, and adversary knowledge to produce finished assessments, reports, and recommendations. This is the most important phase and the one most often mistaken for something it is not: automated enrichment is not analysis, and an IOC list with reputation scores attached is not a finished intelligence product. Analysis requires a human who understands adversary behavior to produce a so-what assessment: here is what this means for our organization, here is what we expect the adversary to do next, here is what we should do about it.

Dissemination means getting the right intelligence to the right audience in the right format. Technical IOCs go to the SIEM. Operational briefs go to the SOC. Strategic reports go to leadership. Format matters as much as content. A strategic briefing written for analysts will not be read by executives.

Feedback closes the loop. Consumers evaluate whether the intelligence was useful, timely, and actionable. This phase is almost universally neglected. Without feedback, the program never improves. The analysts producing intelligence have no idea whether it drove any decisions.

Most organizations fail at both ends simultaneously. They skip Planning because it requires work that doesn't look like security work. They skip Feedback because it requires organizational discipline to close a loop that nobody owns. The result is a collection operation that produces enriched noise and calls it a CTI program.

The Diamond Model and the Kill Chain: use both, understand each

The Cyber Kill Chain, developed by Lockheed Martin in 2011, models an intrusion as seven sequential phases: Reconnaissance, Weaponization, Delivery, Exploitation, Installation, Command and Control, and Actions on Objectives. Its core insight is correct: an adversary must complete every phase to succeed, so defenders can defeat an attack by disrupting any phase. Earlier is better. It communicates well to non-technical stakeholders and maps reasonably well to defensive control categories.

Its shortcomings are significant. The linear model poorly represents modern adversary behavior, which is frequently non-linear, iterative, and multi-threaded. It reflects 2011-era network perimeter thinking. It handles insider threats, cloud environments, and post-compromise lateral movement awkwardly. Most critically, it describes what happened sequentially but says almost nothing about who did it, why they did it, or how to find them elsewhere in your environment. It is a detection framework. It is not an attribution framework.

The Diamond Model, developed by Caltagirone, Pendergast, and Betz in 2013, takes a different approach. It models every intrusion event as a relationship between four core features: Adversary, Capability, Infrastructure, and Victim, arranged at the points of a diamond with bidirectional relationships between them. The power is in pivoting. Know the malware? Pivot to the infrastructure it communicates with. Know the infrastructure? Find other victims. Find other victims? Profile the adversary. It also incorporates two meta-features, Social-Political (adversary motivation and intent) and Technology (the technical relationship between capability and infrastructure), giving it analytical depth the Kill Chain structurally cannot have.
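The pivoting workflow can be sketched as a query over intrusion events, each holding the four diamond vertices. The malware names, IPs, and victim labels below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """One intrusion event: the four vertices of the diamond."""
    adversary: str
    capability: str
    infrastructure: str
    victim: str

# Hypothetical events from several incidents — illustrative data only.
events = [
    Event("unknown", "ExampleRAT", "203.0.113.7", "org-a"),
    Event("unknown", "ExampleRAT", "198.51.100.2", "org-b"),
    Event("unknown", "CredStealer", "203.0.113.7", "org-c"),
]

def pivot(events, **known):
    """Return events matching every known vertex value,
    e.g. pivot(events, capability='ExampleRAT')."""
    return [e for e in events
            if all(getattr(e, k) == v for k, v in known.items())]

# Know the malware? Pivot to its infrastructure...
infra = {e.infrastructure for e in pivot(events, capability="ExampleRAT")}
# ...then pivot on that infrastructure to find other victims,
# including ones hit by a *different* capability sharing the same C2.
victims = {e.victim for e in events if e.infrastructure in infra}
print(infra, victims)
```

The third victim surfaces only because its incident shares infrastructure with the malware's campaign — exactly the kind of connection the diamond's bidirectional edges exist to expose.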

Each has a specific failure mode. The Kill Chain's linearity and perimeter bias make it a poor fit for complex modern intrusions, and it provides no attribution capability. The Diamond Model can be analytically complex and harder for SOC teams to operationalize at speed. It is a thinking framework more than a detection framework, and it requires substantive intelligence to populate. An empty diamond tells you nothing.

Used together: the Kill Chain structures detection and response workflows. The Diamond Model drives adversary profiling, campaign tracking, and strategic intelligence production. ATT&CK layers on top of both to provide technique-level granularity. That combination is what a mature CTI program actually looks like.

CVSS: what it measures, what it doesn't, and why most organizations are using it wrong

CVSS, the Common Vulnerability Scoring System, produces a number between 0.0 and 10.0 that rates the severity of a software vulnerability. Version 3.1 remains the most widely deployed; version 4.0, released in late 2023, is seeing gradual adoption. The score maps to five severity bands: None, Low, Medium, High, and Critical.

CVSS has three metric groups that almost nobody uses correctly. Base Metrics represent the intrinsic, immutable characteristics of the vulnerability itself, independent of environment or time. They cover exploitability (Attack Vector, Attack Complexity, Privileges Required, User Interaction) and impact (Confidentiality, Integrity, and Availability). This is the score most organizations cite. It is also the only score most organizations ever look at.

Temporal Metrics adjust the Base score based on factors that change over time: whether working exploit code is publicly available, whether an official patch exists, and how well-verified the vulnerability report is. These are genuinely useful signals. They are almost universally ignored.

Environmental Metrics let organizations adjust the score based on their specific infrastructure, re-weighting exploitability and impact factors according to existing controls, and expressing how critical the affected asset actually is. A vulnerability on an internet-facing authentication server should score differently than the same vulnerability on an air-gapped development machine. Environmental scoring is the most powerful of the three groups and the most neglected.

Here is what CVSS was never designed to answer: whether a vulnerability is actually being exploited in the wild. This is a critical dimension. A CVSS 9.8 with no known exploitation and no public proof-of-concept is a materially different risk than a CVSS 7.0 being actively weaponized by ransomware groups today. CVSS doesn't know the difference and doesn't pretend to. Most organizations treat the Base score as a proxy for risk anyway.

CVSS is also asset-agnostic. A 9.8 on a non-production isolated test machine represents far less organizational risk than a 7.0 on your identity provider. CVSS doesn't know which is which. It doesn't account for compensating controls either. The same vulnerability behind three layers of network segmentation is not the same operational risk as that vulnerability on a perimeter-facing system.

The scale problem is the one that makes it nearly useless as a triage mechanism at enterprise scale: approximately 25% of all CVEs score High or Critical. When you have thousands of vulnerabilities competing for limited remediation resources, a rating system that flags a quarter of everything as urgent is not helping you prioritize. It is giving you a longer list of things to ignore.

Two alternatives are worth knowing. SSVC, Stakeholder-Specific Vulnerability Categorization developed by CISA and Carnegie Mellon's CERT/CC, uses a decision tree that incorporates exploitation status, technical impact, and mission impact to produce an action rather than a number: Track, Track Closely, Attend, or Act. An action is more operationally useful than a score. EPSS, the Exploit Prediction Scoring System maintained by FIRST, is a machine learning model that produces a probability score representing the likelihood that a given CVE will be exploited in the wild within the next 30 days. It is empirically derived from real-world exploitation telemetry and has been shown to dramatically outperform CVSS as a predictor of actual exploitation activity.

A mature vulnerability prioritization program combines all of these: CVSS Base for severity context, EPSS for exploitation likelihood, SSVC for decision logic, CISA's Known Exploited Vulnerabilities catalog for hard remediation triggers, and internal asset criticality data. The KEV catalog deserves special mention: if CISA has confirmed active exploitation, the organizational risk tolerance conversation is over. Patch it. The real question is not how severe a vulnerability is. It is how likely this specific vulnerability is to be exploited against your specific environment, and what the business impact would be. CVSS alone was never designed to answer that.
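A toy version of that combined logic, written in the spirit of SSVC's output-an-action approach. Every threshold here is an illustrative assumption, not calibrated policy, and the CVE IDs are invented.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss_base: float      # severity context
    epss: float           # probability of exploitation in next 30 days
    on_kev: bool          # listed in CISA's KEV catalog
    asset_critical: bool  # internal asset-criticality judgment

def triage(v: Vuln) -> str:
    """Toy decision logic: output an action, not a score.
    Thresholds are illustrative assumptions, not calibrated policy."""
    if v.on_kev:
        return "Act"  # confirmed exploitation: the conversation is over
    if v.epss >= 0.5 and v.asset_critical:
        return "Act"
    if v.epss >= 0.5 or (v.cvss_base >= 9.0 and v.asset_critical):
        return "Attend"
    if v.cvss_base >= 7.0:
        return "Track Closely"
    return "Track"

# KEV listing trumps a middling base score...
print(triage(Vuln("CVE-2026-0001", 7.0, 0.92, True, False)))   # Act
# ...while a 9.8 with no exploitation signal on a non-critical asset waits.
print(triage(Vuln("CVE-2026-0002", 9.8, 0.01, False, False)))  # Track Closely
```

The inversion in the two examples is the whole argument of this section in four lines: exploitation evidence and asset context, not the base score, drive the action.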

CTI maturity in a SOC: what it actually looks like

An immature SOC treats threat intelligence as a subscription service. It ingests one or two commercial IOC feeds, auto-blocks at the firewall, and calls it a threat intelligence program. Analysts receive alerts with no context. Triage is based on severity scores rather than adversary relevance. Detection rules are vendor-provided signatures that nobody has reviewed against actual threats facing the organization. CTI is organizationally siloed. Where it exists at all, it operates separately from detection engineering and IR with no structured handoff. Intelligence requirements are undefined, so collection is undirected and analysis never happens. The program produces no finished intelligence. Leadership receives no strategic briefings. The ATT&CK framework is on a poster on the wall but not operationalized in detections. CVSS scores drive the vulnerability program. Nobody knows who is actually targeting the organization.

A mature SOC treats CTI as a continuous operational input that touches every function. It has documented PIRs aligned to business risk that drive collection priorities. It runs a TIP such as OpenCTI or ThreatConnect, populated with structured, analyst-curated intelligence mapped to STIX 2.1. Detection engineering maintains a TTP coverage matrix against ATT&CK and knows exactly which techniques have detection coverage and which are blind spots. Every detection rule has a documented intelligence justification: a traceable link back to a threat actor, campaign, or TTP in the platform. Triage analysts receive automatic enrichment at alert time: an alert on a suspicious IP returns not just a reputation score but the associated threat actor, campaign, TLP-marked report, and related IOCs. The IR team has a standing CTI support function that activates on incident declaration. Intelligence flows bidirectionally. External intelligence informs internal detections, and internal telemetry produces new intelligence that feeds back into the platform and out to sharing communities. The vulnerability program uses EPSS and KEV alongside CVSS. Leadership receives regular strategic threat briefings tailored to the organization's sector, geography, and threat profile. The Feedback phase of the intelligence lifecycle is actually executed.
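The TTP coverage matrix described above reduces, at its simplest, to a set computation: compare the techniques your detections claim against the techniques your priority adversaries use. Both datasets below are invented placeholders; real inputs would come from the detection rule repository and the TIP's actor profiles.

```python
# Map of detection rule name → ATT&CK technique IDs it claims to cover.
# Illustrative placeholder data, not a real rule set.
detections = {
    "word_spawns_shell": {"T1059.001"},     # PowerShell
    "lsass_access": {"T1003.001"},          # LSASS Memory dumping
    "dcsync_replication": {"T1003.006"},    # DCSync
}

# Techniques attributed to the organization's priority threat actors
# (hypothetical profile pulled from the TIP).
priority_actor_ttps = {"T1059.001", "T1003.001", "T1021.001", "T1567.002"}

covered = set().union(*detections.values())
gaps = priority_actor_ttps - covered

print(f"Covered priority TTPs: {sorted(covered & priority_actor_ttps)}")
print(f"Blind spots: {sorted(gaps)}")
```

The output is the artifact that separates a mature program from an immature one: a named, finite list of blind spots that detection engineering can work down, rather than a vague sense of "we have EDR."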

The clearest indicator of CTI maturity is whether intelligence drives decisions or merely decorates dashboards. In an immature program, the TIP is a data repository analysts occasionally consult. In a mature program, it is a living operational system an analyst touches every day because it makes their decisions better.

If you are working in CTI or building a CTI capability in a SOC, the question to ask about everything in this post is: which of these is actually happening in my organization? Not which tools are deployed. Not which frameworks appear in documentation. Which ones are being executed as a continuous operational discipline? That gap between what organizations say they do and what they actually do is where most of the risk lives.