FMP
Jan 06, 2026
Insider trading disclosures offer a structured view into how company executives, directors, and significant shareholders transact in their own stock. These filings do not explain motivation or timing, but they do provide observable, standardized records of insider buy and sell activity that can be monitored consistently across companies and time periods.
In this article, we build an insider trading monitoring system using Financial Modeling Prep as the core data layer. FMP exposes insider transaction data in a clean, structured format, making it possible to analyze insider activity without manually parsing regulatory filings or dealing with inconsistent schemas. This allows the focus to remain on interpretation and workflow design rather than data extraction.
The system is implemented as a linear multi-agent pipeline. Each agent owns a narrow responsibility: retrieving insider trade records, normalizing and enriching transactions, evaluating activity patterns, and generating an explainable monitoring summary. Agents execute sequentially and communicate through a shared state, keeping the system deterministic, auditable, and easy to extend.
The objective is not to predict price movements or infer intent. Instead, the monitor highlights patterns that practitioners often track—such as clustered insider buying, repeated selling by key executives, or unusually large transactions relative to historical behavior. These signals help analysts prioritize where to look more closely using publicly disclosed information.
By the end of this article, you will have a clear blueprint for building a compact insider trading monitor grounded in FMP data and organized using a clean, production-friendly multi-agent design.
This linear, explainable agent design mirrors other monitoring workflows built on Financial Modeling Prep data, such as a multi-agent corporate fraud detector that applies deterministic signals to structured financial statements.
The insider trading monitor is built on top of Financial Modeling Prep datasets that expose insider transaction disclosures in a structured, machine-readable form. These datasets allow insider activity to be analyzed programmatically without working directly with raw regulatory filings.
Insider trading disclosures typically include:

- the insider's name and relationship to the company (officer, director, or large shareholder)
- the transaction date and the filing date
- the transaction type (for example, a purchase or a sale)
- the number of shares transacted and, where reported, the transaction price
From a monitoring perspective, each transaction is treated as an event. The system does not attempt to infer intent or predict outcomes. It simply observes what was disclosed and evaluates patterns across time and participants.
Within the multi-agent pipeline, insider trade data serves as the primary input: the collection agent retrieves it first, and every later agent normalizes, scores, or summarizes those same disclosed transactions.
It's also important to note that insider filings can be amended or corrected after the initial disclosure. For this reason, monitoring systems should tolerate updates and reprocessing rather than treating individual records as final or immutable snapshots.
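One minimal way to tolerate amended filings is to deduplicate on a transaction identity and keep the most recent filing. The sketch below uses pandas on a toy frame; the identity columns (`insider`, `transaction_date`) are an assumption, since amendment linkage varies by dataset:

```python
import pandas as pd

# Toy example: the same transaction disclosed twice, with an amended share count
filings = pd.DataFrame({
    "insider": ["Jane Doe", "Jane Doe"],
    "transaction_date": ["2025-11-03", "2025-11-03"],
    "shares": [1000, 1200],                     # second row is the amendment
    "filing_date": ["2025-11-05", "2025-11-12"],
})

# Sort by filing date, then keep only the latest filing per transaction identity
latest = (
    filings.sort_values("filing_date")
           .drop_duplicates(subset=["insider", "transaction_date"], keep="last")
)
print(latest["shares"].tolist())  # [1200]
```

Reprocessing the full window with this rule is idempotent: re-running after an amendment arrives simply replaces the stale record.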
This structured access is what makes an agent-based design practical. Agents can focus on transformation and evaluation instead of handling inconsistent schemas or missing fields.
While insider trades are disclosed with delays and do not explain intent, they still support practical monitoring use cases when treated as structured, historical signals rather than predictive indicators.
From a monitoring perspective, insider trading disclosures are valuable because they provide standardized, comparable records of how key company insiders transact over time. When analyzed collectively, these disclosures help surface behavioral patterns that may warrant closer review, even though they do not imply causation or future price movement.
Common monitoring use cases include:

- surfacing clustered insider buying, where several insiders purchase within a short window
- tracking repeated selling by key executives over time
- flagging unusually large transactions relative to a company's historical insider activity
By grounding the monitor in FMP's insider trading datasets, the system stays focused on verifiable, public information. With the data foundation established, the next step is to define how agents transform raw transactions into interpretable monitoring signals.
A similar disclosure-driven monitoring approach can also be applied to political trading activity, where public transaction records are used to track behavior patterns rather than predict outcomes, as shown in this congressional trading tracker built using FMP's Senate and House APIs.
With the data foundation in place, the insider trading monitor divides the workflow into a small set of agents. Each agent maps to a real step an analyst would take: collect disclosures, clean them into a consistent table, evaluate activity patterns, and produce a review-ready summary.
This separation keeps the implementation compact. It also makes the system easier to extend—new signals can be added without rewriting ingestion, and new summary formats can be added without changing the scoring logic.
The following sections walk through each agent in sequence, showing how raw insider trade disclosures are transformed step by step into review-ready monitoring outputs.
Responsibility: Retrieve insider transaction records from FMP for a target symbol and time window.
What it does in the workflow:

- calls FMP's insider trading endpoint for the target symbol, bounded by a record limit
- stores the raw disclosure records in the shared state for downstream agents
One workflow line: this agent converts “symbol → raw insider trade events”, which becomes the base dataset used by all later steps.
Responsibility: Standardize the raw transactions into a consistent, analyzable format.
Typical transformations:

- parsing transaction or filing dates into a single event_date field
- coercing share counts and prices to numeric types
- mapping raw transaction-type labels to a simple BUY / SELL / OTHER direction
- deriving a trade_value estimate from shares and price
Derived fields such as trade_value depend on price availability and filing completeness, and may be approximate or missing for some disclosures. Downstream signals should tolerate these gaps rather than assume fully populated values.
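As a small illustration of tolerating those gaps (a sketch using pandas on a toy frame, not FMP data), aggregations over trade_value can simply drop missing rows instead of assuming a fully populated column:

```python
import pandas as pd

# Toy normalized trades: one row has no disclosed price, so trade_value is missing
trades = pd.DataFrame({
    "shares": [1000, 500, 2000],
    "price": [10.0, None, 25.0],
})
trades["trade_value"] = trades["shares"] * trades["price"]

# dropna() keeps the aggregation well-defined even when some values are missing
max_value = trades["trade_value"].dropna().max()
print(max_value)  # 50000.0
```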
One workflow line: this agent converts “raw trade events → normalized transaction table” that is safe for aggregation and scoring.
Responsibility: Compute rule-based monitoring signals that identify unusual or review-worthy activity.
These signals are screening heuristics, not accusations. They should remain transparent and explainable. Thresholds used in these signals should be tuned to company size, trading liquidity, and historical activity levels rather than applied uniformly across all symbols.
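One simple way to tune a threshold to a symbol's own history, rather than applying a single global number, is to scale it from the symbol's typical trade value. This is an illustrative heuristic of my own, not part of the FMP data or the pipeline below; the multiple is an assumption to calibrate per policy:

```python
import pandas as pd

def adaptive_threshold(trade_values: pd.Series, multiple: float = 5.0) -> float:
    """Set the 'large trade' cutoff relative to the symbol's own history:
    a trade is flagged only if it exceeds `multiple` times the median value."""
    median = trade_values.dropna().median()
    return float(median * multiple)

# Toy history of trade values for one symbol
history = pd.Series([20_000, 35_000, 50_000, 41_000])
print(adaptive_threshold(history))  # 190000.0 (median 38000.0 * 5)
```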
Examples include:
The output format is structured: each signal record carries a name, a value, a fired flag, and a short why explanation that an analyst can read directly.
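For concreteness, a single signal record from this step might look like the following (values illustrative):

```python
# One structured signal record: name, value, fired flag, and a plain-language reason
signal = {
    "name": "ClusterActivity",
    "value": 4,                       # unique insiders in the lookback window
    "fired": True,
    "why": "4 unique insiders traded in the last 30 days.",
}
print(signal["name"], signal["fired"])
```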
One workflow line: this agent converts “normalized transactions → explainable monitoring signals.”
Responsibility: Produce a compact monitoring output that an analyst can act on.
Alert labels are relative screening indicators derived from triggered signals, not absolute risk judgments or assessments of wrongdoing.
It typically:

- counts how many signals fired and maps the count to a label (No Flags, Moderate, or Needs Review)
- attaches the fired signals with their explanations
- includes a small sample of recent trades so the label can be traced back to disclosures
- appends a note clarifying that the output is a monitoring summary, not a finding of wrongdoing
One workflow line: this agent converts “signals → review-ready insider activity summary.”
To run the examples below, you'll need your own Financial Modeling Prep API key. You can generate a key by creating an FMP account and selecting a plan that includes insider trading data access. Availability of insider trade endpoints, request limits, and historical depth may vary by plan tier, so results can differ across accounts.
You can review plans and generate an API key here.
This section walks through a compact implementation of the monitor using a fixed, linear agent sequence:
Data Collection → Normalization/Enrichment → Signal Evaluation → Summary
Each agent is a small unit with a run(state) method so you can expand it later without turning the pipeline into a large framework project.
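The agent contract can be captured with a structural type. This is a sketch using typing.Protocol (my addition, not part of the article's pipeline); the agents below satisfy it implicitly just by defining a matching run method:

```python
from typing import Any, Dict, Protocol, runtime_checkable

State = Dict[str, Any]

@runtime_checkable
class Agent(Protocol):
    """Structural contract: anything with run(state) -> state is an agent."""
    def run(self, state: State) -> State: ...

class EchoAgent:
    """Trivial example agent that passes state through unchanged."""
    def run(self, state: State) -> State:
        return state

# Structural typing: EchoAgent never names Agent, yet satisfies the contract
print(isinstance(EchoAgent(), Agent))  # True
```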
```python
import requests
import pandas as pd
from typing import Any, Dict, List, Optional

BASE_V3 = "https://financialmodelingprep.com/api/v3"


def fetch_insider_trades(symbol: str, api_key: str, limit: int = 100) -> pd.DataFrame:
    """
    Pulls insider trade records for a symbol from FMP and returns them as a DataFrame.

    Workflow mapping: this call returns disclosure-level insider trade events
    for a ticker, which become the raw monitoring input for downstream
    normalization and signal checks.
    """
    url = f"{BASE_V3}/insider-trading"
    resp = requests.get(
        url,
        params={"symbol": symbol, "limit": limit, "apikey": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return pd.DataFrame(data) if isinstance(data, list) else pd.DataFrame([])
```
What's happening:

- a single function calls the insider-trading endpoint with the symbol, limit, and API key
- HTTP errors raise immediately via raise_for_status, so failures are visible rather than silent
- the JSON payload is returned as a DataFrame, or an empty DataFrame if the response is not a list
Note: FMP's insider trading payload fields can vary by endpoint/version. In the normalization step below, we treat column names defensively instead of hard-asserting a single schema.
```python
from dataclasses import dataclass
from typing import Any, Dict

State = Dict[str, Any]


@dataclass
class DataCollectionAgent:
    api_key: str
    limit: int = 200

    def run(self, state: State) -> State:
        symbol = state["symbol"]
        state["raw_trades_df"] = fetch_insider_trades(
            symbol, self.api_key, limit=self.limit
        )
        return state
```
What's happening:

- the agent wraps the fetch in a run(state) step, reading the target symbol from shared state
- the raw records are written back to state as raw_trades_df for the next agent to consume
```python
import numpy as np


def _pick_first_existing(
    df: pd.DataFrame, candidates: List[str]
) -> Optional[str]:
    for c in candidates:
        if c in df.columns:
            return c
    return None


@dataclass
class NormalizeEnrichAgent:
    def run(self, state: State) -> State:
        df = state["raw_trades_df"].copy()
        if df.empty:
            state["trades_df"] = df
            return state

        # Defensive column picks (schema may differ by endpoint)
        date_col = _pick_first_existing(
            df, ["transactionDate", "filingDate", "date"]
        )
        shares_col = _pick_first_existing(
            df, ["securitiesTransacted", "shares", "quantity"]
        )
        price_col = _pick_first_existing(
            df, ["price", "transactionPrice"]
        )
        type_col = _pick_first_existing(
            df, ["transactionType", "type"]
        )
        name_col = _pick_first_existing(
            df, ["reportingName", "insiderName", "name"]
        )
        title_col = _pick_first_existing(
            df, ["typeOfOwner", "position", "title"]
        )

        # Normalize date
        if date_col:
            df["event_date"] = pd.to_datetime(
                df[date_col], errors="coerce"
            )
        else:
            df["event_date"] = pd.NaT

        # Normalize numeric fields
        if shares_col:
            df["shares"] = pd.to_numeric(
                df[shares_col], errors="coerce"
            )
        else:
            df["shares"] = np.nan

        if price_col:
            df["price"] = pd.to_numeric(
                df[price_col], errors="coerce"
            )
        else:
            df["price"] = np.nan

        # Normalize labels
        df["insider_name"] = (
            df[name_col].astype(str) if name_col else "UNKNOWN"
        )
        df["insider_title"] = (
            df[title_col].astype(str) if title_col else "UNKNOWN"
        )
        df["txn_type_raw"] = (
            df[type_col].astype(str) if type_col else "UNKNOWN"
        )

        # Simple direction rule (kept transparent; refine later)
        lowered = df["txn_type_raw"].str.lower()
        df["direction"] = np.where(
            lowered.str.contains("buy"), "BUY",
            np.where(lowered.str.contains("sell"), "SELL", "OTHER")
        )

        # Derived value
        df["trade_value"] = df["shares"] * df["price"]

        # Keep only what the next agent needs
        keep_cols = [
            "event_date", "insider_name", "insider_title",
            "txn_type_raw", "direction", "shares", "price",
            "trade_value",
        ]
        state["trades_df"] = (
            df[keep_cols]
            .dropna(subset=["event_date"])
            .sort_values("event_date")
        )
        return state
```
Workflow mapping: this agent turns the raw trade events into the normalized transaction table (trades_df) that every later step aggregates and scores.
Key design choices:

- column names are picked defensively from candidate lists instead of assuming one fixed schema
- numeric and date parsing uses errors="coerce" so malformed values become missing rather than raising
- the direction rule is a transparent string match that is easy to audit and refine later
- only the columns needed downstream are kept, which keeps the shared state small and predictable
```python
from datetime import timedelta
from dataclasses import dataclass


@dataclass
class SignalEvaluationAgent:
    lookback_days: int = 30
    min_cluster_insiders: int = 3
    large_trade_value_threshold: float = 500_000  # tune per your policy

    def run(self, state: State) -> State:
        df = state["trades_df"]
        signals = []
        if df.empty:
            state["signals"] = signals
            return state

        end_dt = df["event_date"].max()
        start_dt = end_dt - timedelta(days=self.lookback_days)
        recent = df[df["event_date"].between(start_dt, end_dt)].copy()

        # Signal 1: cluster activity (unique insiders)
        unique_insiders = recent["insider_name"].nunique()
        fired_cluster = unique_insiders >= self.min_cluster_insiders
        signals.append({
            "name": "ClusterActivity",
            "value": int(unique_insiders),
            "fired": fired_cluster,
            "why": (
                f"{unique_insiders} unique insiders traded "
                f"in the last {self.lookback_days} days."
            )
        })

        # Signal 2: large trade value
        max_val = (
            recent["trade_value"].dropna().max()
            if "trade_value" in recent else None
        )
        fired_large = (
            (max_val is not None)
            and (max_val >= self.large_trade_value_threshold)
        )
        signals.append({
            "name": "LargeTransaction",
            "value": None if max_val is None else float(max_val),
            "fired": fired_large,
            "why": (
                "Flags when a recent transaction exceeds "
                "the configured value threshold."
            )
        })

        # Signal 3: buy/sell imbalance
        buys = int((recent["direction"] == "BUY").sum())
        sells = int((recent["direction"] == "SELL").sum())
        imbalance = buys - sells
        fired_imbalance = abs(imbalance) >= 5  # simple heuristic
        signals.append({
            "name": "BuySellImbalance",
            "value": {
                "buys": buys,
                "sells": sells,
                "delta": imbalance
            },
            "fired": fired_imbalance,
            "why": (
                "Flags strongly one-sided insider activity "
                "over the monitoring window."
            )
        })

        state["signals"] = signals
        state["recent_window"] = {
            "start": str(start_dt.date()),
            "end": str(end_dt.date())
        }
        return state
```
Workflow mapping: this agent reads the normalized table, restricts it to a lookback window, and writes explainable signal records plus the window boundaries back into shared state.
```python
from dataclasses import dataclass


@dataclass
class SummaryAlertAgent:
    def run(self, state: State) -> State:
        symbol = state["symbol"]
        signals = state.get("signals", [])
        df = state.get("trades_df", pd.DataFrame())

        fired = [s for s in signals if s.get("fired")]
        score = len(fired)

        if score >= 2:
            label = "Needs Review"
        elif score == 1:
            label = "Moderate"
        else:
            label = "No Flags"

        # compact facts
        recent = df.tail(10) if not df.empty else df
        snapshot = []
        for _, r in recent.iterrows():
            snapshot.append({
                "date": str(r["event_date"].date()),
                "insider": r["insider_name"],
                "title": r["insider_title"],
                "direction": r["direction"],
                "trade_value": (
                    None if pd.isna(r["trade_value"])
                    else float(r["trade_value"])
                ),
            })

        state["alert"] = {
            "symbol": symbol,
            "label": label,
            "signals_fired": score,
            "signals": fired,
            "recent_trades_sample": snapshot,
            "note": (
                "This output is a monitoring summary based on disclosed "
                "insider transactions, not a determination of wrongdoing."
            ),
        }
        return state
```
```python
def run_insider_monitor(symbol: str, api_key: str) -> Dict[str, Any]:
    state: State = {"symbol": symbol}
    agents = [
        DataCollectionAgent(api_key=api_key),
        NormalizeEnrichAgent(),
        SignalEvaluationAgent(),
        SummaryAlertAgent(),
    ]
    for agent in agents:
        state = agent.run(state)
    return state["alert"]
```
This keeps routing explicit and makes the monitor easy to extend: you can insert a new agent without rewriting the pipeline.
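For example, a hypothetical filter agent (its name and rule are invented for illustration) can be slotted between normalization and signal evaluation without touching either neighbor:

```python
from dataclasses import dataclass
from typing import Any, Dict

import pandas as pd

State = Dict[str, Any]


@dataclass
class MinSharesFilterAgent:
    """Hypothetical agent: drops tiny trades before signal evaluation."""
    min_shares: int = 100

    def run(self, state: State) -> State:
        df = state["trades_df"]
        state["trades_df"] = df[df["shares"] >= self.min_shares]
        return state


# It would slot into the list between NormalizeEnrichAgent() and
# SignalEvaluationAgent(); here we exercise it on a toy state:
state = {"trades_df": pd.DataFrame({"shares": [50, 500, 5000]})}
state = MinSharesFilterAgent().run(state)
print(len(state["trades_df"]))  # 2
```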
A monitoring system is only useful if its output is easy to review and easy to act on. For insider activity, the most practical output format is an alert object that combines two layers: facts (the disclosed transactions themselves) and interpretation (which rule-based signals fired, and why).
This keeps the monitor transparent. Readers can always trace a “Needs Review” label back to specific disclosed transactions and explicit rules.
A compact alert format that works well in real workflows carries the symbol, an overall label, the count and details of the fired signals, a small sample of recent trades, and an explanatory note.
This structure mirrors what you'd want in a dashboard, a Slack alert, or an email digest.
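As a small illustration, the alert object can be rendered into a one-line digest entry for any of those channels. This helper is a sketch of mine, not part of the pipeline; the field names follow the summary agent's output:

```python
from typing import Any, Dict

def format_alert_line(alert: Dict[str, Any]) -> str:
    """Render the alert object as a one-line text digest entry."""
    return (
        f"[{alert['label']}] {alert['symbol']}: "
        f"{alert['signals_fired']} signal(s) fired"
    )

# Minimal alert object with only the fields the formatter reads
alert = {"symbol": "AMD", "label": "Needs Review", "signals_fired": 2}
print(format_alert_line(alert))  # [Needs Review] AMD: 2 signal(s) fired
```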
Below is an example of what the monitor can return after running the fixed agent sequence. Field names may differ depending on the exact FMP payload you receive, but the idea stays stable: facts + signals + explanation.
```json
{
  "symbol": "AMD",
  "label": "Needs Review",
  "signals_fired": 2,
  "signals": [
    {
      "name": "ClusterActivity",
      "value": 4,
      "fired": true,
      "why": "4 unique insiders traded in the last 30 days."
    },
    {
      "name": "LargeTransaction",
      "value": 780000.0,
      "fired": true,
      "why": "Flags when a recent transaction exceeds the configured value threshold."
    }
  ],
  "recent_trades_sample": [
    {
      "date": "2025-12-02",
      "insider": "Jane Doe",
      "title": "Director",
      "direction": "BUY",
      "trade_value": 120000.0
    },
    {
      "date": "2025-12-05",
      "insider": "John Smith",
      "title": "Officer",
      "direction": "SELL",
      "trade_value": 780000.0
    }
  ],
  "note": "This output is a monitoring summary based on disclosed insider transactions, not a determination of wrongdoing."
}
```
What's happening here:

- two signals fired, so the summary agent assigned the "Needs Review" label
- each signal carries a plain-language why, so the label is traceable to explicit rules
- the recent trades sample grounds the alert in specific disclosed transactions
This article presented a compact blueprint for monitoring insider trading activity using publicly disclosed data and a linear multi-agent design. The workflow remains intentionally simple: retrieve insider transactions from Financial Modeling Prep, normalize them into a consistent structure, evaluate transparent rule-based signals, and summarize the results in a review-friendly format.
The strength of this approach lies in its separation of concerns. Data access stays isolated from analysis logic, signal definitions remain explicit and explainable, and summaries are grounded in verifiable disclosures rather than assumptions about intent or market impact. This makes the system suitable for screening and prioritization tasks where clarity and auditability matter.
If you extend this monitor further, the most practical directions are:

- adding new rule-based signals without rewriting ingestion
- adding new summary or alert formats without changing the scoring logic
- running the pipeline across a watchlist of symbols or on a schedule
The core idea stays stable: Financial Modeling Prep provides the structured insider trading data, and the multi-agent design keeps the monitoring logic modular, readable, and easy to evolve as your analytical needs grow.