Scammers Are Hijacking Websites to “Poison” ChatGPT with Spammy Recommendations

Cybercriminals and scammers have begun exploiting weaknesses in how ChatGPT validates sources, injecting spam (particularly gambling links) into its AI-generated answers. They do this by taking over hacked websites and buying up expired domains whose "trusted" reputations persist even after malicious content is added.


🔍 How It’s Done

1. Hijacked Websites

Legitimate websites, such as a California legal practice and a UN youth coalition site, have been compromised. Hackers inserted gambling-themed content hidden with techniques like white-on-white text, and ChatGPT then surfaced those pages when users asked about online casinos.
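
For the technically inclined, here is a minimal sketch of how that kind of hidden text can be flagged. It assumes the third-party `requests` and `beautifulsoup4` packages, checks only inline styles (white-on-white is just one of several reported hiding tricks), and uses a placeholder URL; treat it as an illustration, not a real scanner.

```python
# hidden_text_check.py
# Minimal sketch: flag elements whose inline styles suggest text hidden
# from human readers (white-on-white, display:none, zero font size).
# Assumes the `requests` and `beautifulsoup4` packages are installed;
# a real scanner would also need to resolve external stylesheets and scripts.
import re
import requests
from bs4 import BeautifulSoup

# Inline-style patterns commonly used to hide injected text.
HIDING_PATTERNS = [
    r"color\s*:\s*(#fff(fff)?|white)",  # white text (suspicious on a white page)
    r"display\s*:\s*none",              # removed from layout entirely
    r"visibility\s*:\s*hidden",         # invisible but still in the DOM
    r"font-size\s*:\s*0",               # zero-sized text
]

def find_hidden_text(url: str) -> list[str]:
    """Return text snippets carried by elements with 'hiding' inline styles."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for tag in soup.find_all(style=True):
        style = tag["style"].lower()
        if any(re.search(p, style) for p in HIDING_PATTERNS):
            text = tag.get_text(" ", strip=True)
            if text:
                findings.append(text[:120])
    return findings

if __name__ == "__main__":
    # "example.com" is a placeholder; point this at a page you control.
    for snippet in find_hidden_text("https://example.com"):
        print("possible hidden text:", snippet)
```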

2. Repurposed Expired Domains

Malicious actors have purchased expired domains that once belonged to reputable organizations (e.g., arts charities, summer camps) and carry strong backlink histories. These domains now host spammy content, yet ChatGPT still treats them as authoritative because of their legacy reputation.
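
One quick sanity check for a repurposed domain is to compare an old Wayback Machine snapshot with the live page. The sketch below uses only the Python standard library and archive.org's public availability endpoint; the gambling-keyword heuristic and the example domain are illustrative assumptions, not part of any reported tooling.

```python
# domain_history_check.py
# Minimal sketch: fetch the Wayback Machine snapshot closest to an old date
# and compare it with the live page. A sudden shift in topic (e.g. a former
# charity now full of casino keywords) is a red flag for a repurposed domain.
import json
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available?url={url}&timestamp={ts}"
SPAM_KEYWORDS = ("casino", "slots", "betting", "poker")  # crude, illustrative list

def closest_snapshot(domain: str, timestamp: str = "20180101") -> str | None:
    """Return the URL of the archived snapshot closest to `timestamp`, if any."""
    api_url = WAYBACK_API.format(url=domain, ts=timestamp)
    with urllib.request.urlopen(api_url, timeout=10) as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap else None

def looks_spammy(url: str) -> bool:
    """Crude heuristic: does the page mention gambling terms?"""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace").lower()
    return any(k in body for k in SPAM_KEYWORDS)

if __name__ == "__main__":
    domain = "example.org"  # placeholder; substitute the domain you are vetting
    old = closest_snapshot(domain)
    print("archived snapshot:", old or "none found")
    if old:
        print("old snapshot spammy?", looks_spammy(old))
    print("live site spammy?  ", looks_spammy(f"https://{domain}"))
```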


✅ Why It Works

  • Outdated trust signals: ChatGPT still leans heavily on domain age, backlink count, and apparent recency, even when a site's ownership and content have changed drastically.
  • Lack of real-time vetting: The model doesn't verify whether a domain's current content matches its historical identity or legitimacy.

🛡️ How You Can Stay Safe

| Tip | What's behind it |
| --- | --- |
| Question the sources | Always check where recommendations come from; don't assume every domain is credible. |
| Verify current ownership | Use tools like WHOIS or archive sites to confirm that a domain's history aligns with its present content (see the sketch after this table). |
| Cross-check choices | Look for confirmation from trusted sources, especially for gambling, legal, or health advice. |
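
Building on the "Verify current ownership" tip, here is a minimal sketch that issues a raw WHOIS query over TCP port 43 (the standard WHOIS protocol, RFC 3912) so you can eyeball the registrar and registration dates yourself. The fields it prints and the example domain are illustrative assumptions; a dedicated WHOIS client or web lookup works just as well.

```python
# whois_check.py
# Minimal sketch: ask IANA's WHOIS server for the authoritative server for a
# domain, query it, and print registrar/date lines. A recent registrar or
# registrant change on an "old" domain is worth a closer look.
import socket

def whois_query(server: str, query: str) -> str:
    """Send one WHOIS query to `server` on port 43 and return the raw response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def whois(domain: str) -> str:
    """Ask whois.iana.org for the authoritative WHOIS server, then query it."""
    iana = whois_query("whois.iana.org", domain)
    refer = next((line.split(":", 1)[1].strip()
                  for line in iana.splitlines()
                  if line.lower().startswith("refer:")), None)
    return whois_query(refer, domain) if refer else iana

if __name__ == "__main__":
    # "example.com" is a placeholder; substitute the domain behind the recommendation.
    record = whois("example.com")
    for line in record.splitlines():
        if any(k in line.lower() for k in ("registrar:", "creation date", "updated date")):
            print(line.strip())
```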

✏️ Bottom Line

AI tools like ChatGPT can unintentionally echo spam inserted via hacked or repurposed sites. Until models incorporate more real-time domain and content validity checks, users should remain skeptical and verify sources before acting on AI-generated recommendations.