Yes.
Filtering models
AI-powered filtering models automatically remove irrelevant content from the scraping pipeline and the platform. This means you can cast a wider net with broader, more generic keywords (e.g., “cream,” “bag,” or “sneakers”) without being flooded by unrelated results. For example, if you search for “cream”, AI filters out results about dessert recipes and keeps only listings that relate to cosmetics or skincare products.
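To make the idea concrete, here is a minimal sketch of relevance filtering, assuming a generic scoring model; the `relevance_score` function and threshold are hypothetical stand-ins, not the platform’s actual filtering model.

```python
# Illustrative sketch only: the platform's real filtering model is not exposed.
# `relevance_score` stands in for an AI classifier that rates how likely a
# listing is to concern the protected product domain (e.g., cosmetics).
from typing import Callable

def filter_listings(listings: list[dict],
                    relevance_score: Callable[[dict], float],
                    threshold: float = 0.5) -> list[dict]:
    """Keep only listings the model scores as relevant to the brand's domain."""
    return [listing for listing in listings if relevance_score(listing) >= threshold]

# A broad keyword like "cream" returns both skincare and dessert hits; the
# stand-in score function lets skincare through and drops recipes.
listings = [
    {"title": "Anti-aging face cream 50ml", "category": "beauty"},
    {"title": "Best ice cream recipe", "category": "food"},
]
score = lambda listing: 0.9 if listing["category"] == "beauty" else 0.1
print(filter_listings(listings, score))  # -> only the face cream listing
```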
Deep semantic detection
AI doesn’t just match keywords; it understands context. Deep semantic detection allows the system to navigate the internet and highlight content that is most likely infringing, even if sellers avoid obvious keywords (a conceptual sketch follows the examples below). For instance:
A counterfeit handbag listed as “luxury leather tote” without brand names can still be flagged because the imagery, style, and descriptors match patterns of known infringements.
Tutorials or files titled “design toolkit” might be identified as unauthorized versions of branded training material, even though the brand name isn’t mentioned.
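The sketch below shows the underlying idea of matching by meaning rather than exact words, assuming the open-source sentence-transformers package and an arbitrary similarity threshold; the platform’s actual semantic models are not publicly documented.

```python
# Conceptual sketch: compare listing text to known-infringement descriptions by
# meaning, not exact keywords. Assumes the open-source sentence-transformers
# package; the platform's own semantic models are not publicly documented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "counterfeit designer handbag, luxury leather tote, no brand name shown"
listing = "Luxury leather tote, premium stitching, ships worldwide"

emb_ref, emb_listing = model.encode([reference, listing], convert_to_tensor=True)
similarity = util.cos_sim(emb_ref, emb_listing).item()

# A high similarity flags the listing for review even though it never uses
# the brand name or obvious keywords like "replica".
if similarity > 0.6:
    print(f"Flag for review (semantic similarity {similarity:.2f})")
```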
Image matching
AI also compares visual features—such as shapes, colors, and packaging details—to group identical or near-identical product images. This ensures that large volumes of duplicate or lookalike listings are discovered together. For instance:
A seller uploads dozens of fake sneaker listings with slightly cropped or resized images—image matching clusters them into one group, making them easier to flag, label (infringing or not), and review.
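As a rough illustration of this kind of grouping, the sketch below clusters near-duplicate images with perceptual hashing (the open-source imagehash package); the platform’s production image-matching models are not specified here and likely rely on learned visual embeddings rather than simple hashes.

```python
# Illustration only: groups near-identical images with perceptual hashing.
# Assumes the open-source `imagehash` and Pillow packages; the platform's
# production image-matching models are not publicly documented.
from PIL import Image
import imagehash

def group_near_duplicates(paths: list[str], max_distance: int = 6) -> list[list[str]]:
    """Cluster image files whose perceptual hashes are within max_distance bits."""
    hashes = {p: imagehash.phash(Image.open(p)) for p in paths}
    groups: list[list[str]] = []
    for path, h in hashes.items():
        for group in groups:
            if h - hashes[group[0]] <= max_distance:  # Hamming distance
                group.append(path)
                break
        else:
            groups.append([path])
    return groups

# Cropped or resized copies of the same sneaker photo tend to land in one group,
# so a moderator can review and label the whole cluster at once.
```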
How they work together
Filtering, semantic detection, and image matching work in tandem: filtering removes noise, semantic detection uncovers hidden risks in text, and image matching captures visual duplicates at scale—ensuring broader, smarter, and more efficient discovery of infringing content.
AI-driven grouping of visually similar content
Image matching uses AI models to analyze and compare visual features (such as shape, color patterns, textures, packaging details, or logos) and to group together images that are likely to depict the same or highly similar products. This allows the system to cluster near-duplicates and lookalike items, even when sellers alter angles, lighting, or backgrounds to avoid detection.
How it works in the platform
Within Image View, the platform displays these AI-generated image groups. Instead of moderators reviewing hundreds of individual listings with the same product photo, they can review the entire group at once. When a moderator applies a label (e.g., “Counterfeit,” “Unauthorized Use,” “Parallel Import”), that label is applied to every listing in the group, drastically reducing repetitive work.
This strengthens auto-moderation by propagating infringing labels across all grouped images.
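Conceptually, label propagation across a group can be pictured as follows; the data structure and field names are hypothetical, not the platform’s internal model.

```python
# Sketch of label propagation across an image group; the data model and field
# names here are hypothetical, not the platform's internal representation.
from dataclasses import dataclass, field

@dataclass
class ImageGroup:
    group_id: str
    listing_ids: list[str]
    labels: dict[str, str] = field(default_factory=dict)  # listing_id -> label

    def apply_label(self, label: str) -> None:
        """One moderator decision labels every listing in the group."""
        for listing_id in self.listing_ids:
            self.labels[listing_id] = label

group = ImageGroup("grp-001", ["post-1", "post-2", "post-3"])
group.apply_label("Counterfeit")
# All three listings now carry the "Counterfeit" label from a single review.
```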
Related articles: Automated Image Feature Detection
AI recognition of brand marks
Logo detection uses AI to identify logos within images, even if they are resized, cropped, or altered.
Moderator support
It is a key tool for moderators, surfacing content that contains brand marks so they can quickly focus on the most relevant images and listings.
Part of the wider AI system
Logo detection also strengthens other AI functions:
Filtering – removing irrelevant content while keeping logo-bearing items in scope.
Auto-moderation rules – applying or suggesting labels when logos appear on unauthorized products.
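As a hedged sketch of how a logo signal might feed such a rule, the snippet below uses a hypothetical detect_logos function and illustrative seller data; the platform does not expose its detection API in this form.

```python
# Hypothetical sketch: how a logo-detection signal could feed an
# auto-moderation rule. `detect_logos` stands in for an AI detector the
# platform does not expose; the seller list and label name are illustrative.
from typing import Callable

AUTHORIZED_SELLERS = {"official-brand-store"}

def suggest_label(listing: dict,
                  detect_logos: Callable[[str], set[str]]) -> str | None:
    """Suggest a label when a protected logo appears on an unauthorized seller's image."""
    logos = detect_logos(listing["image_path"])
    if "protected-brand-logo" in logos and listing["seller"] not in AUTHORIZED_SELLERS:
        return "Unauthorized Use"
    return None
```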
Related articles: Automated Image Feature Detection
Automated labeling at scale
AI-powered auto-moderation rules can instantly label listings based on their features (such as product type, description patterns, or image attributes). For example, a listing with the keywords “replica,” “AAA quality,” or a seller watermark commonly associated with counterfeiters can be automatically tagged as “High Risk – Counterfeit.”
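A minimal sketch of a keyword-based rule like the one described above; the rule logic and label name are illustrative, not the platform’s configuration format.

```python
# Illustrative only: a keyword-based auto-moderation rule. Real rules are
# configured in the platform; this is not its configuration format.
HIGH_RISK_KEYWORDS = {"replica", "aaa quality"}

def auto_label(listing: dict) -> str | None:
    """Tag a listing as high risk when its text contains known counterfeit markers."""
    text = f'{listing.get("title", "")} {listing.get("description", "")}'.lower()
    if any(keyword in text for keyword in HIGH_RISK_KEYWORDS):
        return "High Risk – Counterfeit"
    return None

print(auto_label({"title": "AAA quality watch", "description": "best replica"}))
# -> "High Risk – Counterfeit"
```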
Contextual AI insights in moderator workflows
AI doesn’t just assign static labels; it provides insights directly within Post View and Image View to help moderators quickly validate or refine the categorization (a price-check sketch follows the examples below). For example:
In Post View, AI may highlight a suspicious phrase like “OEM version” or detect mismatched pricing (e.g., a luxury watch priced at $20), automatically suggesting a “Suspicious – Grey Market” label.
In Image View, AI may flag a product image that visually matches a known counterfeit pattern (such as identical packaging design but blurred logos) and recommend the “Counterfeit – Visual Match” label.
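The price-mismatch example can be sketched as a simple check; the reference prices, threshold, and label name below are illustrative assumptions, not platform values.

```python
# Sketch of one contextual check: flag listings whose price is far below a
# typical market price for the product type. Reference prices and the label
# name are illustrative; this is not the platform's rule engine.
REFERENCE_PRICES = {"luxury watch": 5000.0}  # hypothetical median market prices (USD)

def price_mismatch_insight(listing: dict, ratio: float = 0.05) -> str | None:
    """Suggest a grey-market label when the asking price is implausibly low."""
    reference = REFERENCE_PRICES.get(listing["product_type"])
    if reference and listing["price"] < reference * ratio:
        return "Suspicious – Grey Market"
    return None

print(price_mismatch_insight({"product_type": "luxury watch", "price": 20.0}))
# -> "Suspicious – Grey Market"
```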
Related articles: Auto-moderation Rules Overview, Auto-moderation Rules Configuration, Labels Overview, Labels Configuration
Auto-moderation rules can be powered by AI features such as:
Image matching
Logo detection
NaveeGPT
Related articles: Auto-moderation Rules Overview, Auto-moderation Rules Configuration
All Meta ads are listed under facebook.com/ads. To search specifically for the platform an ad appeared on, apply one of the following filters in the Tags field:
meta_platform:instagram
meta_platform:facebook
Related articles: How to Create a Saved Filter
Check the Saved Filters list → look for TO VALIDATE.
If it doesn’t exist, apply the following filters:
Moderation Status = Checked
Website = All Tracked Websites
Date Range = Select the period of interest
Related articles: How to Find Posts for Validation (Review), How to Mark Posts Validated (Reviewed), How to Create a Saved Filter
What happens when a listing is labeled as “Irrelevant,” and how does that differ from other labels?
When a listing is moderated, the choice you select (e.g., Relevant, Irrelevant, Counterfeit, Trademark Infringement, etc.) does more than just categorize that single post — it teaches the system how to treat similar listings in the future.
If marked as Relevant:
The system recognizes it as a potential infringement and continues surfacing similar listings for review.
This strengthens future detection patterns (keywords, imagery, seller behavior).
If marked as Irrelevant:
The system learns that this type of listing should not be flagged again.
Future detection adjusts to reduce false positives (e.g., filtering out certain keywords, product categories, or seller behaviors that don’t represent risk).
Over time, this improves precision by lowering noise in search results.
If marked under a specific infringement type (Counterfeit, Trademark Abuse, Copyright, etc.):
The system logs the violation category, which influences how Quality Analysts and Enforcers handle it.
These labels can also help the system cluster patterns (e.g., repeat counterfeit sellers, “compatible with” trademark misuse).
In short:
Relevant → strengthens detection of similar risks.
Irrelevant → teaches the system what to ignore, reducing future false positives.
Other infringement categories → define the nature of the violation, which shapes enforcement workflows and reporting.
Every moderation choice is a signal. Relevant keeps similar results coming, Irrelevant suppresses them, and specific infringement labels guide both reporting and enforcement. Together, these decisions fine-tune what the system prioritizes and ignores in future detection.
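A conceptual sketch of this feedback loop; the weighting mechanism below is purely illustrative, as the platform’s actual learning pipeline is not exposed.

```python
# Conceptual sketch of the feedback loop: moderation decisions adjust what the
# system surfaces next. The weighting scheme is illustrative, not the
# platform's actual learning pipeline.
from collections import defaultdict

pattern_weights: dict[str, float] = defaultdict(lambda: 1.0)

def record_decision(listing_patterns: list[str], label: str) -> None:
    """Shift pattern weights down for Irrelevant labels, up for all other labels."""
    delta = -0.2 if label == "Irrelevant" else 0.2
    for pattern in listing_patterns:
        pattern_weights[pattern] = max(0.0, pattern_weights[pattern] + delta)

record_decision(["keyword:cream", "category:food"], "Irrelevant")
record_decision(["keyword:cream", "category:beauty"], "Counterfeit")
# Food-related "cream" results are now down-weighted; beauty-related ones are boosted.
```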
Related articles: Labels Overview, Labels Configuration
Apply the TRS Tag filter (takedown request sent).
Related articles: How to Find Enforced Data, How to Create a Saved Filter
Email notifications will be available in Q4 2025.