Multi-Agent Insider Trading Monitor Using FMP Insider Trades API

Insider trading disclosures offer a structured view into how company executives, directors, and significant shareholders transact in their own stock. These filings do not explain motivation or timing, but they do provide observable, standardized records of insider buy and sell activity that can be monitored consistently across companies and time periods.

In this article, we build an insider trading monitoring system using Financial Modeling Prep as the core data layer. FMP exposes insider transaction data in a clean, structured format, making it possible to analyze insider activity without manually parsing regulatory filings or dealing with inconsistent schemas. This allows the focus to remain on interpretation and workflow design rather than data extraction.

What This Monitor Is Designed to Do

The system is implemented as a linear multi-agent pipeline. Each agent owns a narrow responsibility: retrieving insider trade records, normalizing and enriching transactions, evaluating activity patterns, and generating an explainable monitoring summary. Agents execute sequentially and communicate through a shared state, keeping the system deterministic, auditable, and easy to extend.

The objective is not to predict price movements or infer intent. Instead, the monitor highlights patterns that practitioners often track—such as clustered insider buying, repeated selling by key executives, or unusually large transactions relative to historical behavior. These signals help analysts prioritize where to look more closely using publicly disclosed information.

By the end of this article, you will have a clear blueprint for building a compact insider trading monitor grounded in FMP data and organized using a clean, production-friendly multi-agent design.

This linear, explainable agent design mirrors other monitoring workflows built on Financial Modeling Prep data, such as this multi-agent corporate fraud detector that applies deterministic signals to structured financial statements.

FMP Data Sources Behind the Article

  • Search Insider Trades API: Allows querying historical insider trading activity across companies, helping identify patterns or clusters of insider behavior that may coincide with financial anomalies.
  • Latest Insider Trade API: Provides the most recent insider trade disclosures for a company, enabling timely review of insider activity alongside newly reported financial results.

Insider Trading Data Sources from Financial Modeling Prep

The insider trading monitor is built on top of Financial Modeling Prep datasets that expose insider transaction disclosures in a structured, machine-readable form. These datasets allow insider activity to be analyzed programmatically without working directly with raw regulatory filings.

Insider Trades as Structured Records

Insider trading disclosures typically include:

  • The insider's role (officer, director, beneficial owner)
  • Transaction type (buy, sell, option exercise, etc.)
  • Transaction date
  • Number of shares transacted
  • Transaction price
  • Post-transaction ownership (when available)

From a monitoring perspective, each transaction is treated as an event. The system does not attempt to infer intent or predict outcomes. It simply observes what was disclosed and evaluates patterns across time and participants.
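
For illustration, a single disclosure can be treated as a self-contained event record. The dictionary below is a hypothetical, simplified example; the actual field names and values depend on the FMP payload and on the normalization step described later.

# Hypothetical insider trade event (field names and values are illustrative)
example_event = {
    "symbol": "AMD",
    "insider_name": "Jane Doe",
    "insider_role": "Director",
    "transaction_type": "BUY",
    "transaction_date": "2025-12-02",
    "shares": 1_000,
    "price": 120.0,                      # per-share transaction price
    "post_transaction_shares": 25_000,   # may be absent in some filings
}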

How FMP Insider Trades Data Fits the Pipeline

Within the multi-agent pipeline, insider trades data serves as the primary input:

  • The data ingestion agent retrieves insider transactions for a given symbol over a defined lookback window.
  • Each transaction becomes a normalized row that downstream agents can compare, group, or aggregate.
  • Consistent field names and formats make it possible to compute signals without custom parsing logic.

It's also important to note that insider filings can be amended or corrected after the initial disclosure. For this reason, monitoring systems should tolerate updates and reprocessing rather than treating individual records as final or immutable snapshots.

This structured access is what makes an agent-based design practical. Agents can focus on transformation and evaluation instead of handling inconsistent schemas or missing fields.

Why Insider Trades Are Useful for Monitoring

While insider trades are disclosed with delays and do not explain intent, they still support practical monitoring use cases when treated as structured, historical signals rather than predictive indicators.

From a monitoring perspective, insider trading disclosures are valuable because they provide standardized, comparable records of how key company insiders transact over time. When analyzed collectively, these disclosures help surface behavioral patterns that may warrant closer review, even though they do not imply causation or future price movement.

Common monitoring use cases include:

  • Identifying clusters of insider buying or selling
  • Tracking repeated activity by key executives
  • Flagging unusually large transactions relative to historical behavior
  • Observing changes in insider behavior around earnings or corporate events

By grounding the monitor in FMP's insider trading datasets, the system stays focused on verifiable, public information. With the data foundation established, the next step is to define how agents transform raw transactions into interpretable monitoring signals.

A similar disclosure-driven monitoring approach can also be applied to political trading activity, where public transaction records are used to track behavior patterns rather than predict outcomes, as shown in this congressional trading tracker built using FMP's Senate and House APIs.

Agent Responsibilities

With the data foundation in place, the insider trading monitor divides the workflow into a small set of agents. Each agent maps to a real step an analyst would take: collect disclosures, clean them into a consistent table, evaluate activity patterns, and produce a review-ready summary.

This separation keeps the implementation compact. It also makes the system easier to extend—new signals can be added without rewriting ingestion, and new summary formats can be added without changing the scoring logic.

The following sections walk through each agent in sequence, showing how raw insider trade disclosures are transformed step by step into review-ready monitoring outputs.

Data Collection Agent

Responsibility: Retrieve insider transaction records from FMP for a target symbol and time window.

What it does in the workflow:

  • Calls the relevant FMP insider trading dataset
  • Collects recent transactions (and optionally paginates if needed)
  • Returns the raw filings as a structured table for downstream processing

One workflow line: this agent converts “symbol → raw insider trade events”, which becomes the base dataset used by all later steps.

Normalization and Enrichment Agent

Responsibility: Standardize the raw transactions into a consistent, analyzable format.

Typical transformations:

  • Normalize dates into a single timezone-aware timestamp format
  • Convert shares, price, and transaction value into numeric fields
  • Standardize buy vs sell direction and transaction categories
  • Optionally enrich with derived fields such as:
    • trade_value = shares × price
    • insider role buckets (CEO/CFO/Director/10% owner)
    • rolling activity counts per insider

Derived fields such as trade_value depend on price availability and filing completeness, and may be approximate or missing for some disclosures. Downstream signals should tolerate these gaps rather than assume fully populated values.
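
A hedged sketch of this enrichment is shown below: it derives a coarse role bucket from the insider's title and a simple per-insider activity count. The column names (insider_title, insider_name, event_date) follow the normalized schema used later in this article; the bucketing rules and the helper name enrich_roles_and_activity are illustrative.

import pandas as pd

def enrich_roles_and_activity(trades: pd.DataFrame) -> pd.DataFrame:
    """Illustrative enrichment: coarse role buckets plus per-insider activity counts."""
    df = trades.copy()

    # Map free-text owner titles into coarse buckets (rules are illustrative)
    title = df["insider_title"].astype(str).str.lower()
    df["role_bucket"] = "OTHER"
    df.loc[title.str.contains("director", na=False), "role_bucket"] = "DIRECTOR"
    df.loc[title.str.contains("chief financial|cfo", na=False), "role_bucket"] = "CFO"
    df.loc[title.str.contains("chief executive|ceo", na=False), "role_bucket"] = "CEO"
    df.loc[title.str.contains("10%|10 percent", na=False), "role_bucket"] = "10PCT_OWNER"

    # Count how many disclosed trades each insider has in the sample;
    # a time-windowed (rolling) variant can be built on the same idea.
    df["insider_trade_count"] = (
        df.groupby("insider_name")["event_date"].transform("count")
    )
    return df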

One workflow line: this agent converts “raw trade events → normalized transaction table” that is safe for aggregation and scoring.

Signal Evaluation Agent

Responsibility: Compute rule-based monitoring signals that identify unusual or review-worthy activity.

These signals are screening heuristics, not accusations. They should remain transparent and explainable. Thresholds used in these signals should be tuned to company size, trading liquidity, and historical activity levels rather than applied uniformly across all symbols.

Examples include:

  • Cluster buying/selling: multiple insiders transacting within a short window
  • Repeated activity: the same insider appearing multiple times across recent days/weeks
  • Large trade size: transactions that exceed a threshold (absolute or relative to recent activity)
  • Role-weighted activity: trades by key executives (e.g., CEO/CFO) flagged more prominently
  • Buy/sell imbalance: unusually one-sided activity over a lookback window

Output format is structured, for example:

  • Signal name
  • Computed value(s)
  • Threshold rule
  • Fired (true/false)
  • Explanation string grounded in the numbers

One workflow line: this agent converts “normalized transactions → explainable monitoring signals.”
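
The reference implementation later in this article computes the cluster, large-transaction, and imbalance signals; the repeated-activity check is left out for brevity. As a hedged sketch, it reduces to a groupby over the normalized table. The column names follow this article's schema, and the min_trades threshold is illustrative.

import pandas as pd

def repeated_activity_signal(recent: pd.DataFrame, min_trades: int = 3) -> dict:
    """Illustrative signal: the same insider appearing repeatedly in the window."""
    counts = recent.groupby("insider_name").size()
    repeat_insiders = counts[counts >= min_trades]
    return {
        "name": "RepeatedActivity",
        "value": repeat_insiders.to_dict(),
        "fired": not repeat_insiders.empty,
        "why": (
            f"{len(repeat_insiders)} insider(s) disclosed {min_trades}+ "
            "transactions within the monitoring window."
        ),
    }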

Summary and Alert Agent

Responsibility: Produce a compact monitoring output that an analyst can act on.

Alert labels are relative screening indicators derived from triggered signals, not absolute risk judgments or assessments of wrongdoing.

It typically:

  • Assigns a simple monitoring label (e.g., No Flags, Moderate, Needs Review) based on triggered signals
  • Summarizes what happened using deterministic facts:
    • Who traded
    • Buy vs sell direction
    • Total value
    • How many insiders
    • Over what time window
  • Lists the exact signals that triggered and why
  • Adds a “what to check next” list without making claims (e.g., review role context, compare to prior periods, check proximity to earnings)

One workflow line: this agent converts “signals → review-ready insider activity summary.”

Implementing the Insider Trading Monitor in Python

To run the examples below, you'll need your own Financial Modeling Prep API key. You can generate a key by creating an FMP account and selecting a plan that includes insider trading data access. Availability of insider trade endpoints, request limits, and historical depth may vary by plan tier, so results can differ across accounts.

You can review plans and generate an API key here.

This section walks through a compact implementation of the monitor using a fixed, linear agent sequence:

Data Collection → Normalization/Enrichment → Signal Evaluation → Summary

Each agent is a small unit with a run(state) method so you can expand it later without turning the pipeline into a large framework project.
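
That shared contract can be written down as a small protocol: every agent reads from the state dictionary, adds its own keys, and returns it. The MonitorAgent protocol below is a sketch of this convention; the dataclasses in the following sections simply satisfy it without inheriting from anything.

from typing import Any, Dict, Protocol

State = Dict[str, Any]

class MonitorAgent(Protocol):
    """Shared contract: read keys from the state dict, add new ones, return it."""
    def run(self, state: State) -> State:
        ...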

1) Minimal FMP client helper

import requests
import pandas as pd
from typing import Any, Dict, List, Optional

BASE_V3 = "https://financialmodelingprep.com/api/v3"


def fetch_insider_trades(symbol: str, api_key: str, limit: int = 100) -> pd.DataFrame:
    """
    Pulls insider trade records for a symbol from FMP and returns them as a DataFrame.

    Workflow mapping:
    This call returns disclosure-level insider trade events for a ticker, which become
    the raw monitoring input for downstream normalization and signal checks.
    """
    url = f"{BASE_V3}/insider-trading"
    resp = requests.get(
        url,
        params={"symbol": symbol, "limit": limit, "apikey": api_key},
        timeout=30,
    )
    resp.raise_for_status()

    data = resp.json()
    return pd.DataFrame(data) if isinstance(data, list) else pd.DataFrame([])

What's happening:

  • Passing symbol, limit, and apikey through params keeps the request parameters explicit and the call reproducible.
  • We return a DataFrame early so the rest of the pipeline works with tabular transforms.

Note: FMP's insider trading payload fields can vary by endpoint/version. In the normalization step below, we treat column names defensively instead of hard-asserting a single schema.
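
A quick usage sketch, assuming the helper above and a placeholder API key:

# Pull recent disclosures for one ticker (placeholder API key)
raw_df = fetch_insider_trades("AMD", api_key="YOUR_FMP_API_KEY", limit=100)
print(raw_df.shape)
print(raw_df.head())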

2) Shared state + agent skeleton

from dataclasses import dataclass
from typing import Any, Dict

State = Dict[str, Any]


@dataclass
class DataCollectionAgent:
    api_key: str
    limit: int = 200

    def run(self, state: State) -> State:
        symbol = state["symbol"]
        state["raw_trades_df"] = fetch_insider_trades(
            symbol,
            self.api_key,
            limit=self.limit,
        )
        return state

What's happening:

  • state["symbol"] drives everything.
  • The agent stores the raw dataset under a clear key and exits.

3) Normalization and enrichment agent

import numpy as np


def _pick_first_existing(
    df: pd.DataFrame,
    candidates: List[str]
) -> Optional[str]:
    for c in candidates:
        if c in df.columns:
            return c
    return None


@dataclass
class NormalizeEnrichAgent:
    def run(self, state: State) -> State:
        df = state["raw_trades_df"].copy()
        if df.empty:
            state["trades_df"] = df
            return state

        # Defensive column picks (schema may differ by endpoint)
        date_col = _pick_first_existing(
            df, ["transactionDate", "filingDate", "date"]
        )
        shares_col = _pick_first_existing(
            df, ["securitiesTransacted", "shares", "quantity"]
        )
        price_col = _pick_first_existing(
            df, ["price", "transactionPrice"]
        )
        type_col = _pick_first_existing(
            df, ["transactionType", "type"]
        )
        name_col = _pick_first_existing(
            df, ["reportingName", "insiderName", "name"]
        )
        title_col = _pick_first_existing(
            df, ["typeOfOwner", "position", "title"]
        )

        # Normalize date
        if date_col:
            df["event_date"] = pd.to_datetime(df[date_col], errors="coerce")
        else:
            df["event_date"] = pd.NaT

        # Normalize numeric fields
        if shares_col:
            df["shares"] = pd.to_numeric(df[shares_col], errors="coerce")
        else:
            df["shares"] = np.nan

        if price_col:
            df["price"] = pd.to_numeric(df[price_col], errors="coerce")
        else:
            df["price"] = np.nan

        # Normalize labels
        df["insider_name"] = (
            df[name_col].astype(str) if name_col else "UNKNOWN"
        )
        df["insider_title"] = (
            df[title_col].astype(str) if title_col else "UNKNOWN"
        )
        df["txn_type_raw"] = (
            df[type_col].astype(str) if type_col else "UNKNOWN"
        )

        # Simple direction rule (kept transparent; refine later).
        # Transaction labels vary across filings (e.g., purchase/sale codes),
        # so match a few common keywords and leave the rest as OTHER.
        lowered = df["txn_type_raw"].str.lower()
        df["direction"] = np.where(
            lowered.str.contains("buy|purchase"),
            "BUY",
            np.where(lowered.str.contains("sell|sale"), "SELL", "OTHER")
        )

        # Derived value
        df["trade_value"] = df["shares"] * df["price"]

        # Keep only what the next agent needs
        keep_cols = [
            "event_date",
            "insider_name",
            "insider_title",
            "txn_type_raw",
            "direction",
            "shares",
            "price",
            "trade_value",
        ]

        state["trades_df"] = (
            df[keep_cols]
            .dropna(subset=["event_date"])
            .sort_values("event_date")
        )
        return state



Workflow mapping:

  • This agent converts raw disclosures → clean, comparable events so scoring logic stays simple.

Key design choices:

  • _pick_first_existing() avoids breaking if FMP returns slightly different field names.
  • direction stays rule-based and explainable (no hidden model behavior).
  • trade_value enables size-based signals without needing extra joins.

4) Signal evaluation agent

from datetime import timedelta
from dataclasses import dataclass


@dataclass
class SignalEvaluationAgent:
    lookback_days: int = 30
    min_cluster_insiders: int = 3
    large_trade_value_threshold: float = 500_000  # tune per your policy

    def run(self, state: State) -> State:
        df = state["trades_df"]
        signals = []

        if df.empty:
            state["signals"] = signals
            return state

        end_dt = df["event_date"].max()
        start_dt = end_dt - timedelta(days=self.lookback_days)
        recent = df[df["event_date"].between(start_dt, end_dt)].copy()

        # Signal 1: cluster activity (unique insiders)
        unique_insiders = recent["insider_name"].nunique()
        fired_cluster = unique_insiders >= self.min_cluster_insiders
        signals.append({
            "name": "ClusterActivity",
            "value": int(unique_insiders),
            "fired": fired_cluster,
            "why": (
                f"{unique_insiders} unique insiders traded "
                f"in the last {self.lookback_days} days."
            )
        })

        # Signal 2: large trade value
        max_val = (
            recent["trade_value"].dropna().max()
            if "trade_value" in recent else None
        )
        if max_val is not None and pd.isna(max_val):
            max_val = None  # all values missing in the window
        fired_large = (
            (max_val is not None)
            and (max_val >= self.large_trade_value_threshold)
        )
        signals.append({
            "name": "LargeTransaction",
            "value": None if max_val is None else float(max_val),
            "fired": fired_large,
            "why": (
                "Flags when a recent transaction exceeds "
                "the configured value threshold."
            )
        })

        # Signal 3: buy/sell imbalance
        buys = int((recent["direction"] == "BUY").sum())
        sells = int((recent["direction"] == "SELL").sum())
        imbalance = buys - sells
        fired_imbalance = abs(imbalance) >= 5  # simple heuristic
        signals.append({
            "name": "BuySellImbalance",
            "value": {
                "buys": buys,
                "sells": sells,
                "delta": imbalance
            },
            "fired": fired_imbalance,
            "why": (
                "Flags strongly one-sided insider activity "
                "over the monitoring window."
            )
        })

        state["signals"] = signals
        state["recent_window"] = {
            "start": str(start_dt.date()),
            "end": str(end_dt.date())
        }
        return state



Workflow mapping:

  • Converts normalized events → explainable signals (each with value + threshold + reason).

5) Summary and alert agent

from dataclasses import dataclass


@dataclass
class SummaryAlertAgent:
    def run(self, state: State) -> State:
        symbol = state["symbol"]
        signals = state.get("signals", [])
        df = state.get("trades_df", pd.DataFrame())

        fired = [s for s in signals if s.get("fired")]
        score = len(fired)

        if score >= 2:
            label = "Needs Review"
        elif score == 1:
            label = "Moderate"
        else:
            label = "No Flags"

        # Compact facts: a small sample of the most recent trades
        recent = df.tail(10) if not df.empty else df
        snapshot = []
        for _, r in recent.iterrows():
            snapshot.append({
                "date": str(r["event_date"].date()),
                "insider": r["insider_name"],
                "title": r["insider_title"],
                "direction": r["direction"],
                "trade_value": (
                    None if pd.isna(r["trade_value"])
                    else float(r["trade_value"])
                ),
            })

        state["alert"] = {
            "symbol": symbol,
            "label": label,
            "signals_fired": score,
            "signals": fired,
            "recent_trades_sample": snapshot,
            "note": (
                "This output is a monitoring summary based on disclosed "
                "insider transactions, not a determination of wrongdoing."
            ),
        }
        return state



6) Fixed linear runner

def run_insider_monitor(symbol: str, api_key: str) -> Dict[str, Any]:
    state: State = {"symbol": symbol}

    agents = [
        DataCollectionAgent(api_key=api_key),
        NormalizeEnrichAgent(),
        SignalEvaluationAgent(),
        SummaryAlertAgent(),
    ]

    for agent in agents:
        state = agent.run(state)

    return state["alert"]



This keeps routing explicit and makes the monitor easy to extend: you can insert a new agent without rewriting the pipeline.
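
As an illustration, a hypothetical agent that tags whether the symbol belongs to a predefined watchlist can be dropped into the sequence without touching the existing agents. The agent name, watchlist, and API key below are placeholders.

from dataclasses import dataclass

@dataclass
class WatchlistTagAgent:
    """Hypothetical agent: annotates the state with a watchlist flag."""
    watchlist: tuple = ("AMD", "NVDA", "INTC")

    def run(self, state: State) -> State:
        state["on_watchlist"] = state["symbol"] in self.watchlist
        return state

# Insert it between signal evaluation and the summary step
agents = [
    DataCollectionAgent(api_key="YOUR_FMP_API_KEY"),
    NormalizeEnrichAgent(),
    SignalEvaluationAgent(),
    WatchlistTagAgent(),
    SummaryAlertAgent(),
]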

Output Design: Explainable Insider Activity Alerts

A monitoring system is only useful if its output is easy to review and easy to act on. For insider activity, the most practical output format is an alert object that combines two layers:

  1. Deterministic facts pulled directly from FMP insider trade records
  2. Explainable signals that describe why the activity was flagged for review

This keeps the monitor transparent. Readers can always trace a “Needs Review” label back to specific disclosed transactions and explicit rules.

Recommended Alert Structure

A compact alert format that works well in real workflows looks like this:

  • symbol: which ticker is being monitored
  • label: screening label derived from triggered signals (e.g., No Flags / Moderate / Needs Review)
  • recent_window: the time window used for evaluation
  • signals: list of fired signals with computed values and plain-language reasons
  • recent_trades_sample: small, human-readable sample of recent insider transactions
  • note: a clear disclaimer that this is monitoring, not a fraud/illegal-trading determination

This structure mirrors what you'd want in a dashboard, a Slack alert, or an email digest.
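
As a sketch of how the alert object could be rendered for a chat message or email digest, the helper below formats the fields described above into plain text. The function name and layout are illustrative; only the alert keys defined earlier are assumed.

def format_alert_text(alert: dict) -> str:
    """Render the alert dict as a short plain-text digest (illustrative)."""
    lines = [
        f"Insider activity monitor: {alert['symbol']} [{alert['label']}]",
        f"Signals fired: {alert['signals_fired']}",
    ]
    for sig in alert.get("signals", []):
        lines.append(f"  • {sig['name']}: {sig['why']}")
    for trade in alert.get("recent_trades_sample", [])[:5]:
        lines.append(
            f"  {trade['date']} {trade['direction']:<4} "
            f"{trade['insider']} ({trade['title']})"
        )
    lines.append(alert["note"])
    return "\n".join(lines)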

Output (JSON)

Below is an example of what the monitor can return after running the fixed agent sequence. Field names may differ depending on the exact FMP payload you receive, but the idea stays stable: facts + signals + explanation.

{
  "symbol": "AMD",
  "label": "Needs Review",
  "signals_fired": 2,
  "signals": [
    {
      "name": "ClusterActivity",
      "value": 4,
      "fired": true,
      "why": "4 unique insiders traded in the last 30 days."
    },
    {
      "name": "LargeTransaction",
      "value": 780000.0,
      "fired": true,
      "why": "Flags when a recent transaction exceeds the configured value threshold."
    }
  ],
  "recent_trades_sample": [
    {
      "date": "2025-12-02",
      "insider": "Jane Doe",
      "title": "Director",
      "direction": "BUY",
      "trade_value": 120000.0
    },
    {
      "date": "2025-12-05",
      "insider": "John Smith",
      "title": "Officer",
      "direction": "SELL",
      "trade_value": 780000.0
    }
  ],
  "note": "This output is a monitoring summary based on disclosed insider transactions, not a determination of wrongdoing."
}

What's happening here:

  • The alert label is driven purely by how many signals fired, not by subjective interpretation.
  • Signals carry the computed value and the reason so readers can audit the logic.
  • The trade sample keeps the output reviewable without dumping the full dataset.

Final Words: Monitoring Insider Activity with a Structured Agent Pipeline

This article presented a compact blueprint for monitoring insider trading activity using publicly disclosed data and a linear multi-agent design. The workflow remains intentionally simple: retrieve insider transactions from Financial Modeling Prep, normalize them into a consistent structure, evaluate transparent rule-based signals, and summarize the results in a review-friendly format.

The strength of this approach lies in its separation of concerns. Data access stays isolated from analysis logic, signal definitions remain explicit and explainable, and summaries are grounded in verifiable disclosures rather than assumptions about intent or market impact. This makes the system suitable for screening and prioritization tasks where clarity and auditability matter.

If you extend this monitor further, the most practical directions are:

  • Running the pipeline on a watchlist instead of a single symbol (see the sketch after this list)
  • Tracking how insider activity patterns evolve across rolling time windows
  • Adding role-weighted or value-weighted signals using the same agent boundaries
  • Integrating the monitor into scheduled jobs or dashboards for ongoing surveillance
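
As a minimal sketch of the first direction, the loop below runs run_insider_monitor over a small list of symbols and surfaces the ones that need attention first; the tickers and API key are placeholders.

# Run the monitor across a watchlist (placeholder tickers and API key)
watchlist = ["AMD", "NVDA", "MSFT"]
alerts = [
    run_insider_monitor(symbol, api_key="YOUR_FMP_API_KEY")
    for symbol in watchlist
]

needs_review = [a for a in alerts if a["label"] == "Needs Review"]
for alert in needs_review:
    print(alert["symbol"], "-", alert["signals_fired"], "signal(s) fired")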

The core idea stays stable: Financial Modeling Prep provides the structured insider trading data, and the multi-agent design keeps the monitoring logic modular, readable, and easy to evolve as your analytical needs grow.