Tutorials · January 25, 2026 · 9 min read · Updated February 24, 2026

How to Investigate Disinformation Campaigns Using AI and OSINT


The tweet went viral in minutes. A shocking claim about election fraud, complete with a grainy video and urgent language. Within hours, it had been shared 50,000 times. But something didn't add up—the accounts amplifying it shared suspiciously similar posting patterns, and many were created within days of each other. Was this organic outrage or a coordinated disinformation campaign?


For security researchers and investigative journalists, distinguishing authentic grassroots movements from manufactured consensus has become one of the defining challenges of our time. Disinformation campaigns have grown sophisticated, employing networks of automated accounts, human troll farms, and carefully crafted narratives designed to exploit algorithmic amplification.

This tutorial provides a practical framework for investigating disinformation campaigns using AI-powered OSINT tools, walking through the methodology for detecting coordinated inauthentic behavior, tracking narrative spread, and building evidence packages that can withstand scrutiny.

Understanding the Anatomy of Disinformation Campaigns

Before diving into investigation techniques, it's essential to understand how modern disinformation operations function. They rarely consist of a single bad actor—instead, they operate as ecosystems with distinct components.

The Amplification Network

Most campaigns rely on layers of accounts working in concert:

  • Seed accounts that introduce narratives into the information ecosystem
  • Amplifier accounts (often automated or semi-automated) that boost content through retweets, shares, and engagement
  • Legitimizer accounts that add credibility through commentary and apparent organic discussion
  • Bridge accounts that help content cross from fringe communities into mainstream discourse

Behavioral Signatures

Coordinated campaigns leave traces. Key indicators include:

  • Temporal clustering (many accounts posting similar content within narrow time windows)
  • Network clustering (accounts following each other at rates inconsistent with organic growth)
  • Content homogeneity (near-identical phrasing, hashtags, or talking points)
  • Account creation patterns (clusters of accounts created around the same dates)
  • Abnormal engagement ratios (posts receiving engagement primarily from suspicious accounts)
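Several of these signatures can be checked programmatically once you have a sample of posts. As a minimal Python sketch (not tied to any particular API, with an invented sample and a tunable threshold), content homogeneity can be estimated with Jaccard similarity over word shingles:

```python
from itertools import combinations

def shingles(text, k=3):
    """Split text into overlapping k-word shingles for near-duplicate detection."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def homogeneous_pairs(posts, threshold=0.7):
    """Return index pairs of posts whose text is suspiciously similar."""
    sets = [shingles(p) for p in posts]
    return [(i, j) for i, j in combinations(range(len(posts)), 2)
            if jaccard(sets[i], sets[j]) >= threshold]

posts = [
    "Massive election fraud evidence found in key counties tonight",
    "Massive election fraud evidence found in key counties tonight folks",
    "I baked sourdough bread this weekend and it was great",
]
print(homogeneous_pairs(posts))  # only the two near-identical posts pair up
```

Organic discussion paraphrases; copy-paste amplification produces many high-similarity pairs across ostensibly unrelated accounts.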

Phase 1: Initial Detection and Scoping

Every disinformation investigation begins with a signal—something that suggests coordinated activity rather than organic discourse.

Identifying Suspicious Patterns

Start by monitoring for anomalies in how content spreads. When you encounter potentially coordinated activity, your first task is establishing scope:

Volume Analysis: How much activity exists around this narrative? Use keyword counting to establish baseline metrics:

Tool: countTweets
Parameters:
  phrase: "election fraud evidence"
  startDate: "2025-11-01"
  endDate: "2025-11-15"

Sudden spikes in volume—particularly around specific events or outside normal news cycles—warrant deeper investigation.
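To make "sudden spike" concrete, here is an illustrative sketch that flags days whose volume sits far above the series baseline. The daily counts are invented, and the z-score cutoff is an assumption to tune against your own baselines:

```python
import statistics

def detect_spikes(daily_counts, z_threshold=3.0):
    """Flag days whose post volume exceeds mean + z * stdev of the series.

    daily_counts: list of (date_string, count) in chronological order.
    """
    counts = [c for _, c in daily_counts]
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts) or 1.0  # avoid division by zero on flat series
    return [(day, count) for day, count in daily_counts
            if (count - mean) / stdev >= z_threshold]

# Two weeks of steady baseline volume, then a sudden burst on the final day
baseline = [("2025-11-%02d" % d, 120 + (d % 3) * 5) for d in range(1, 14)]
series = baseline + [("2025-11-14", 4800)]
print(detect_spikes(series))  # only the burst day is flagged
```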

Initial Account Sampling: Identify accounts actively spreading the narrative by searching for users authoring relevant content:

Tool: getTwitterUsersByKeywords
Parameters:
  query: "\"election fraud\" OR \"stolen election\""
  fields: ["username", "createdAt", "followersCount", "followingCount", 
           "isInauthentic", "isInauthenticProbScore", "avgTweetsPerDayLastMonth"]

The isInauthentic and isInauthenticProbScore fields provide AI-powered assessments of account authenticity, giving you an immediate filter for identifying potential bot or coordinated accounts.

Building Your Initial Target List

From your initial sampling, build a list of accounts exhibiting suspicious characteristics:

  • High inauthenticity probability scores
  • Recent account creation dates
  • Abnormal posting frequencies
  • Follower/following ratios inconsistent with account age
  • Generic or template-like profile descriptions
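The filtering itself is straightforward once account records are in hand. This sketch assumes dicts shaped like the tool output above (createdAt, isInauthenticProbScore, avgTweetsPerDayLastMonth); the thresholds and sample accounts are illustrative, not canonical:

```python
from datetime import datetime, timedelta

def build_target_list(accounts, score_cutoff=0.7, max_age_days=90,
                      max_posts_per_day=72, as_of=None):
    """Filter sampled accounts down to those worth deeper investigation.

    Requires at least two independent signals (young account, high
    inauthenticity score, hyperactive posting) before listing an account.
    """
    as_of = as_of or datetime(2025, 11, 15)
    targets = []
    for acct in accounts:
        created = datetime.strptime(acct["createdAt"], "%Y-%m-%d")
        young = (as_of - created) <= timedelta(days=max_age_days)
        suspicious_score = acct["isInauthenticProbScore"] >= score_cutoff
        hyperactive = acct["avgTweetsPerDayLastMonth"] >= max_posts_per_day
        if sum([young, suspicious_score, hyperactive]) >= 2:
            targets.append(acct["username"])
    return targets

sample = [
    {"username": "organic_user", "createdAt": "2019-04-02",
     "isInauthenticProbScore": 0.05, "avgTweetsPerDayLastMonth": 4},
    {"username": "burst_acct_01", "createdAt": "2025-10-28",
     "isInauthenticProbScore": 0.91, "avgTweetsPerDayLastMonth": 210},
]
print(build_target_list(sample))
```

Requiring multiple concurrent signals, rather than any single one, reduces false positives from legitimate new or prolific accounts.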

Phase 2: Network Mapping and Coordination Detection

With target accounts identified, the next phase involves mapping relationships to identify coordination patterns.

Analyzing Follower Networks

Coordinated accounts often follow each other at unusually high rates. Map the network structure:

Tool: getTwitterUserConnections
Parameters:
  username: "[suspicious_account]"
  connectionType: "followers"
  fields: ["id", "username", "createdAt", "isInauthentic", 
           "isInauthenticProbScore", "followersCount"]

Look for:

  • Network density: Do the suspicious accounts you've identified follow each other at rates higher than random chance?
  • Creation clustering: Were follower accounts created in temporal bursts?
  • Authenticity clustering: Do followers show elevated inauthenticity scores?
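Network density in particular is easy to quantify once follower lists are collected. A sketch, assuming you have already reduced each connection query result to a set of followed usernames (the data structure here is hypothetical):

```python
def follow_density(follow_map):
    """Fraction of possible directed follow edges present within a suspect set.

    follow_map: {username: set of usernames they follow}, restricted to the
    suspect cluster under investigation.
    """
    users = list(follow_map)
    n = len(users)
    possible = n * (n - 1)
    if possible == 0:
        return 0.0
    actual = sum(1 for u in users for v in users
                 if u != v and v in follow_map[u])
    return actual / possible

cluster = {
    "acct_a": {"acct_b", "acct_c"},
    "acct_b": {"acct_a", "acct_c"},
    "acct_c": {"acct_a", "acct_b"},
}
print(follow_density(cluster))  # fully interconnected cluster -> 1.0
```

Organic communities rarely approach full interconnection; densities near 1.0 across dozens of accounts are a strong coordination signal.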

Engagement Pattern Analysis

Examine who amplifies suspicious content. For a specific viral post:

Tool: getTwitterPostInteractingUsers
Parameters:
  postId: "[viral_post_id]"
  interactionType: "retweeters"
  fields: ["username", "createdAt", "followersCount", "isInauthentic", 
           "isInauthenticProbScore", "avgTweetsPerDayLastMonth"]

Coordinated amplification often reveals itself through:

  • Temporal clustering: Retweets arriving in unnatural bursts
  • Account age patterns: Disproportionate engagement from recently created accounts
  • Cross-account coordination: The same accounts repeatedly amplifying content from the network
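Temporal clustering of retweets can be measured with a sliding window over the interaction timestamps. The timestamps below are invented; the 60-second window is an assumption to adjust for the post's overall reach:

```python
from datetime import datetime

def max_burst(timestamps, window_seconds=60):
    """Largest number of retweets falling inside any sliding time window.

    timestamps: ISO-8601 strings, one per retweet.
    """
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    best, i = 0, 0
    for j, t in enumerate(times):
        while (t - times[i]).total_seconds() > window_seconds:
            i += 1  # slide the window start forward past stale entries
        best = max(best, j - i + 1)
    return best

burst = ["2025-11-05T14:00:%02d" % s for s in range(0, 24, 2)]   # 12 in 22 s
organic = ["2025-11-05T%02d:00:00" % h for h in range(14, 20)]   # 6 over hours
print(max_burst(burst), max_burst(organic))
```

A dozen retweets inside one minute from accounts with few followers looks very different from the same dozen spread across an afternoon.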

Documenting Coordination Evidence

For each suspected coordinated cluster, document:

  1. Account creation date distributions
  2. Network connection overlaps
  3. Posting time correlations
  4. Content similarity metrics
  5. Authenticity score distributions
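Item 1 above, the creation date distribution, can be documented with a simple bucketing pass. This sketch groups invented creation dates by ISO week; a single dominant week across an otherwise unrelated account set is a classic batch-creation signature:

```python
from collections import Counter
from datetime import datetime

def creation_clusters(created_dates):
    """Count account creations per ISO week, most common first."""
    weeks = Counter(
        "{0}-W{1:02d}".format(*datetime.strptime(d, "%Y-%m-%d").isocalendar()[:2])
        for d in created_dates
    )
    return weeks.most_common()

dates = ["2025-10-27", "2025-10-28", "2025-10-30", "2025-11-01", "2023-05-14"]
print(creation_clusters(dates))  # four of five accounts share one ISO week
```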

Export complete datasets for statistical analysis:

Tool: getTwitterUsersByKeywords
Parameters:
  query: "[campaign_keywords]"
  exportToCsv: true

The CSV export provides the granular data needed for statistical validation of coordination hypotheses.

Phase 3: Narrative Tracking and Content Analysis

Understanding what narratives are being pushed—and how they evolve—is central to disinformation investigation.

Tracking Narrative Evolution

Monitor how specific claims spread and mutate:

Tool: getTwitterPostsByKeywords
Parameters:
  query: "\"dominion voting\" OR \"voting machines\""
  fields: ["id", "text", "authorUsername", "createdAtDate", 
           "retweetCount", "quoteCount", "impressionCount"]
  startDate: "2025-11-03"
  endDate: "2025-11-10"

Track:

  • Narrative seeding: Which accounts first introduced specific claims?
  • Amplification peaks: When did narratives achieve maximum spread?
  • Mutation patterns: How do claims evolve as they spread?
  • Platform jumping: Do narratives move between Twitter, Instagram, and other platforms?

Cross-Platform Analysis

Sophisticated campaigns operate across multiple platforms. Compare activity patterns:

Twitter Analysis:

Tool: getTwitterPostsByKeywords
Parameters:
  query: "[narrative_keywords]"
  startDate: "2025-11-01"
  endDate: "2025-11-15"

Instagram Analysis:

Tool: getInstagramPostsByKeywords
Parameters:
  query: "[narrative_keywords]"
  startDate: "2025-11-01"
  endDate: "2025-11-15"

Cross-platform coordination—the same narratives appearing simultaneously across platforms, often with suspiciously similar phrasing—provides strong evidence of organized campaigns.
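One way to quantify "appearing simultaneously" is to correlate daily post counts across platforms. A Pearson-correlation sketch over hypothetical counts (it assumes both series vary; a flat series would need a guard):

```python
def platform_correlation(series_a, series_b):
    """Pearson correlation of two daily post-count series keyed by date string.

    Only dates present in both series are compared.
    """
    shared = sorted(set(series_a) & set(series_b))
    xs = [series_a[d] for d in shared]
    ys = [series_b[d] for d in shared]
    n = len(shared)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

twitter = {"2025-11-01": 10, "2025-11-02": 500, "2025-11-03": 15}
instagram = {"2025-11-01": 5, "2025-11-02": 300, "2025-11-03": 8}
print(platform_correlation(twitter, instagram))  # spikes on the same day
```

Correlation alone does not prove coordination (a real news event also spikes everywhere at once); combine it with the phrasing-similarity and account-level evidence above.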

Identifying Seed Accounts

Finding the origin points of narratives helps identify the orchestrators:

Tool: getTwitterPostsByKeywords
Parameters:
  query: "\"specific claim in quotes\""
  fields: ["id", "text", "authorUsername", "createdAtDate"]
  sortBy: "createdAt"
  sortOrder: "asc"

The earliest posts containing specific narrative elements often lead back to seed accounts or the original sources of manufactured content.
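Once posts are sorted chronologically, extracting candidate seed accounts is a matter of taking the earliest distinct authors. The post records below are invented but mirror the fields in the query above:

```python
from datetime import datetime

def likely_seeds(posts, top_n=3):
    """Return the earliest distinct authors of a claim, candidate seed accounts.

    posts: dicts with authorUsername and an ISO-8601 createdAtDate.
    """
    ordered = sorted(posts, key=lambda p: datetime.fromisoformat(p["createdAtDate"]))
    seen, seeds = set(), []
    for post in ordered:
        author = post["authorUsername"]
        if author not in seen:
            seen.add(author)
            seeds.append(author)
        if len(seeds) == top_n:
            break
    return seeds

posts = [
    {"authorUsername": "amp_42", "createdAtDate": "2025-11-04T09:15:00"},
    {"authorUsername": "seed_account", "createdAtDate": "2025-11-03T22:01:00"},
    {"authorUsername": "amp_17", "createdAtDate": "2025-11-04T08:00:00"},
    {"authorUsername": "seed_account", "createdAtDate": "2025-11-03T23:30:00"},
]
print(likely_seeds(posts, top_n=2))
```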

Phase 4: Attribution and Evidence Collection

Building a defensible case requires systematic evidence collection and careful attribution analysis.

Deep Profile Analysis

For suspected key operators, conduct thorough profile analysis:

Tool: getTwitterUser
Parameters:
  identifier: "[suspected_operator]"
  identifierType: "username"
  fields: ["id", "username", "name", "description", "createdAt",
           "followersCount", "followingCount", "tweetCount",
           "usernameChanges", "lastUsernameChangeDatetime",
           "isInauthentic", "isInauthenticProbScore", "inauthenticType",
           "accountBasedIn", "locationAccurate"]

Key attribution indicators:

  • Username history: Accounts that have changed usernames may have pivoted from other campaigns
  • Location data: Discrepancies between claimed and detected locations
  • Account age vs. activity: Dormant accounts suddenly activated for campaigns
  • Inauthenticity classification: The specific type of inauthentic behavior detected
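The "account age vs. activity" indicator can be expressed as a single ratio of recent to lifetime posting rate. This sketch uses the tweetCount, createdAt, and avgTweetsPerDayLastMonth fields from the profile query above; the account values are invented:

```python
from datetime import datetime

def dormancy_ratio(account, as_of=None):
    """Ratio of last-month posting rate to lifetime average posting rate.

    A dormant account suddenly activated for a campaign shows a large ratio.
    """
    as_of = as_of or datetime(2025, 11, 15)
    created = datetime.strptime(account["createdAt"], "%Y-%m-%d")
    age_days = max((as_of - created).days, 1)
    lifetime_avg = account["tweetCount"] / age_days
    # Floor the lifetime average so near-empty histories don't divide by zero
    return account["avgTweetsPerDayLastMonth"] / max(lifetime_avg, 0.01)

sleeper = {"createdAt": "2018-01-01", "tweetCount": 300,
           "avgTweetsPerDayLastMonth": 85}
steady = {"createdAt": "2018-01-01", "tweetCount": 30000,
          "avgTweetsPerDayLastMonth": 11}
print(dormancy_ratio(sleeper), dormancy_ratio(steady))
```

A ratio near 1 is unremarkable; a sleeper account posting hundreds of times its lifetime rate during a campaign window is worth flagging.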

Building Evidence Packages

Document your findings systematically:

  1. Account evidence: Profiles, creation dates, network connections, authenticity scores
  2. Content evidence: Screenshots, post IDs, timestamps, engagement metrics
  3. Network evidence: Connection maps, follower overlaps, engagement patterns
  4. Temporal evidence: Timeline reconstructions showing coordination
  5. Statistical evidence: Quantitative analysis demonstrating non-organic patterns

Use CSV exports to preserve complete datasets:

Tool: getTwitterPostsByAuthor
Parameters:
  username: "[target_account]"
  exportToCsv: true
  fields: ["id", "text", "createdAtDate", "retweetCount", "likeCount"]

Practical Investigation Workflow

Here's a complete workflow for investigating a suspected disinformation campaign:

Step 1: Signal Detection

Identify the potential campaign through anomalous volume, suspicious accounts, or coordinated messaging. Establish initial keywords and timeframes.

Step 2: Scope Assessment

Quantify the scale of the campaign:

  • Total post volume around key narratives
  • Number of unique accounts involved
  • Timeframe of primary activity
  • Platforms affected

Step 3: Account Profiling

Build a database of suspicious accounts with:

  • Authenticity scores and classifications
  • Creation dates and account ages
  • Posting frequencies and patterns
  • Network connections

Step 4: Network Analysis

Map relationships between suspicious accounts:

  • Follower/following overlaps
  • Engagement patterns (who amplifies whom)
  • Creation date clustering
  • Content sharing patterns

Step 5: Narrative Tracing

Track the spread of specific claims:

  • Identify seed accounts and first appearances
  • Document mutation and evolution
  • Map cross-platform spread
  • Identify key amplification nodes

Step 6: Evidence Compilation

Package findings with:

  • Complete CSV exports of relevant data
  • Timeline reconstructions
  • Network visualizations
  • Statistical analysis of coordination patterns

Ethical Considerations

Disinformation investigation carries significant ethical responsibilities:

Accuracy: False attribution can harm innocent parties. Require strong evidence before making claims about coordination.

Context: Not all coordinated activity is malicious. Distinguish between grassroots organizing, marketing campaigns, and hostile information operations.

Proportionality: Focus investigative resources on campaigns causing genuine harm rather than mere political disagreement.

Transparency: Document methodology clearly so findings can be verified or challenged.

Safety: Be aware that investigating active operations may expose you to retaliation. Practice operational security.

Key Takeaways

  • Pattern recognition is essential: Disinformation campaigns leave behavioral signatures in temporal patterns, network structures, and engagement metrics that distinguish them from organic activity.

  • AI-powered authenticity scoring accelerates detection: Inauthenticity probability scores provide immediate filtering to identify likely coordinated accounts, dramatically reducing manual review time.

  • Cross-platform analysis reveals sophistication: Modern campaigns operate across multiple platforms; investigating only one gives an incomplete picture of the operation.

  • Documentation standards matter: Rigorous evidence collection—including preserved datasets, timestamps, and statistical analysis—separates credible investigation from speculation.

  • Network analysis exposes coordination: The relationships between accounts often reveal coordination more clearly than content analysis alone.

Conclusion

Investigating disinformation campaigns requires a systematic approach combining technical analysis, behavioral understanding, and rigorous documentation. The tools and methodologies outlined here provide a framework for security researchers and investigative journalists to detect coordinated inauthentic behavior, trace narrative spread, and build evidence packages that can withstand scrutiny.

As disinformation techniques evolve, so must investigation capabilities. AI-powered analysis tools that can assess authenticity at scale, map network relationships, and track cross-platform spread have become essential for keeping pace with sophisticated information operations.

The stakes are significant. Disinformation campaigns undermine public discourse, manipulate democratic processes, and erode trust in institutions. Rigorous, ethical investigation that exposes these operations serves as a crucial defense mechanism for information integrity.

Start with a single suspicious signal. Follow the network connections. Document everything. The patterns will emerge.
