Detection Methodology
This page explains how results are generated, what can go wrong, and how to validate findings before drawing conclusions.
Last updated: March 27, 2026
1) Data source and scope
Each check is driven by platform-specific URL templates and matching rules in the maintained site dataset. The scanner focuses on publicly reachable profile surfaces and does not authenticate into private areas.
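A rule entry for one platform can be sketched as follows. The field names and sites here are illustrative assumptions about the dataset's shape, not the actual schema:

```python
# Hypothetical rule entries; field names are illustrative, not the real schema.
SITE_RULES = {
    "GitHub": {
        "url_template": "https://github.com/{username}",
        "found_status": 200,      # status expected when the profile exists
        "not_found_status": 404,  # status expected when it does not
    },
    "ExampleForum": {
        "url_template": "https://forum.example.com/u/{username}",
        "found_status": 200,
        # Some sites answer 200 either way, so a content rule is needed too:
        "error_text": "User not found",
    },
}

def build_url(site: str, username: str) -> str:
    """Fill the platform's URL template with the candidate username."""
    return SITE_RULES[site]["url_template"].format(username=username)
```

Keeping templates and match rules in data rather than code is what lets rules be revised without redeploying the scanner.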
2) Detection pipeline
- Validate username format and prepare the target request set.
- Build site-specific URLs using platform rule templates.
- Run concurrent HTTP checks with timeout controls.
- Evaluate status code and content-match rules per platform.
- Stream matched results in real time and expose source links.
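The pipeline above can be sketched in miniature. This is a simplified model under assumed rule fields (`found_status`, `error_text`); the `fetch` callable is injected so the sketch stays offline and testable, and a real implementation would stream results rather than collect them:

```python
import concurrent.futures
from typing import Callable, Optional

# Hypothetical per-platform rules: expected status plus optional error text.
RULES = {
    "siteA": {"found_status": 200, "error_text": None},
    "siteB": {"found_status": 200, "error_text": "not found"},
}

def evaluate(rule: dict, status: int, body: str) -> bool:
    """Apply the status-code and content-match rules for one platform."""
    if status != rule["found_status"]:
        return False
    if rule["error_text"] and rule["error_text"] in body:
        return False  # soft 404: right status code, wrong content
    return True

def scan(username: str, fetch: Callable[[str], tuple],
         timeout: float = 5.0) -> dict:
    """Run all site checks concurrently; `fetch(site)` returns (status, body)."""
    results: dict[str, Optional[bool]] = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(fetch, site): site for site in RULES}
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            site = futures[fut]
            try:
                status, body = fut.result()
                results[site] = evaluate(RULES[site], status, body)
            except Exception:
                results[site] = None  # timeout/error: unknown, not "absent"
    return results
```

Note the three-valued outcome: a failed request is recorded as unknown rather than as a miss, which matters when interpreting coverage.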
3) Safe mode policy
By default, adult-category platforms are excluded from scan coverage on this website. This keeps results suitable for a general audience and aligned with common advertising policy requirements.
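The exclusion amounts to a category filter applied before any requests are made. A minimal sketch, assuming each site entry carries a `category` field (an illustrative assumption about the dataset):

```python
# Hypothetical site entries; the "category" field is an assumed schema detail.
SITES = [
    {"name": "GitHub", "category": "dev"},
    {"name": "ExampleAdultSite", "category": "adult"},
]

def scan_targets(sites: list, safe_mode: bool = True) -> list:
    """Return the sites eligible for scanning under the current policy."""
    if not safe_mode:
        return list(sites)
    return [s for s in sites if s["category"] != "adult"]
```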
4) Known limitations and false signals
Platform behavior changes frequently. Some sites return a generic page for every request, require JavaScript rendering to show real profile content, or rate-limit automated clients, any of which can produce false positives or false negatives. Redirects and anti-bot defenses may also distort availability checks.
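One common defense against sites that answer 200 for every URL is a baseline probe: request a random username that almost certainly does not exist, and if the site still reports success, status codes from that site cannot be trusted on their own. A hedged sketch (the probe format and `fetch` signature are assumptions):

```python
import secrets

def looks_like_soft_404(fetch, url_template: str) -> bool:
    """Probe a random, almost certainly unregistered username.

    If the site still answers 200, the status code alone cannot
    distinguish existing profiles from missing ones; a content
    rule is needed for that platform instead.
    """
    probe = "zz_" + secrets.token_hex(8)  # e.g. zz_9f3a1c...
    status, _body = fetch(url_template.format(username=probe))
    return status == 200
```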
5) Verification checklist
- Open profile URLs manually and inspect profile context.
- Cross-check at least three independent platform signals.
- Compare account creation clues and posting cadence.
- Document uncertainty levels in your case notes.
- Never assert identity from one username match alone.
6) Feedback and rule updates
Detection rules are periodically revised when platform behavior changes. If you observe an incorrect result, report it with username, platform, and expected behavior so the rule can be triaged and updated quickly.