Filtering models
AI-powered filtering models automatically remove irrelevant content from the scraping pipeline and platform. This means we can cast a wider net with broader or generic keywords (e.g., “cream,” “bag,” or “sneakers”) without being flooded by unrelated results. For example, if you search “cream”, AI will filter out results about dessert recipes and only keep listings that relate to cosmetics or skincare products.
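The filtering step can be pictured as a relevance scorer sitting between the broad scrape and the platform. The sketch below uses a simple term-overlap scorer as a stand-in for the real trained model, and the term lists are invented for illustration; only the pipeline shape (broad query, score, keep or drop) reflects the description above.

```python
# Stand-in relevance scorer for broad-keyword scraping results.
# The production system uses a trained ML model; these term sets
# are hypothetical and exist only to make the pipeline runnable.
RELEVANT_TERMS = {"skincare", "moisturiser", "cosmetic", "serum"}
IRRELEVANT_TERMS = {"recipe", "dessert", "whipped", "baking"}

def relevance_score(listing_text: str) -> int:
    """Stub for an ML model: scores how likely a listing is on-topic."""
    tokens = set(listing_text.lower().split())
    return len(tokens & RELEVANT_TERMS) - len(tokens & IRRELEVANT_TERMS)

def filter_listings(listings, threshold=1):
    """Keep only listings the scorer judges relevant to the brand."""
    return [text for text in listings if relevance_score(text) >= threshold]

results = [
    "vegan whipped cream dessert recipe",
    "anti-ageing face cream skincare serum",
]
print(filter_listings(results))  # the dessert recipe is filtered out
```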
Deep semantic detection
AI doesn’t just match keywords; it understands context. Deep semantic detection allows the system to scan content across the internet and highlight listings that are most likely infringing, even if sellers avoid obvious keywords. For instance:
A counterfeit handbag listed as “luxury leather tote” without brand names can still be flagged because the imagery, style, and descriptors match patterns of known infringements.
Tutorials or files titled “design toolkit” might be identified as unauthorized versions of branded training material, even though the brand name isn’t mentioned.
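One way to picture semantic matching is as a similarity score between a listing's text and descriptions of known infringements. The sketch below uses bag-of-words cosine similarity as a stand-in for the learned text embeddings a real system would use; the reference text and threshold are invented for illustration.

```python
# Sketch of semantic detection via vector similarity.
# Production systems use learned embeddings; a bag-of-words
# vector stands in here so the matching logic is visible.
import math
from collections import Counter

# Hypothetical reference text describing a known infringement pattern.
KNOWN_INFRINGEMENT = "luxury leather tote designer handbag gold hardware"

def vectorise(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

_ref = vectorise(KNOWN_INFRINGEMENT)

def is_suspicious(listing_text, threshold=0.4):
    """Flag a listing whose wording is close to a known infringement."""
    return cosine(vectorise(listing_text), _ref) >= threshold

# Flagged despite containing no brand name at all:
print(is_suspicious("luxury leather tote bag"))
```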
Image matching
AI also compares visual features—such as shapes, colours, and packaging details—to group identical or near-identical product images. This ensures that large volumes of duplicate or lookalike listings are discovered together. For instance:
A seller uploads dozens of fake sneaker listings with slightly cropped or resized images; image matching clusters them into one group, making them easier to flag, label (infringing or not), and review.
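The cropped-and-resized sneaker scenario can be sketched with a perceptual difference hash (dHash): near-identical images produce hashes a few bits apart and fall into the same group. Tiny brightness grids stand in for decoded image pixels here, and the distance threshold is illustrative, not the platform's actual tuning.

```python
# Sketch of near-duplicate image grouping with a difference hash (dHash).
# Real pipelines hash decoded image pixels; small brightness grids
# stand in for images so the grouping logic stays self-contained.

def dhash(grid):
    """One bit per horizontal neighbour comparison in a brightness grid."""
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

def group_near_duplicates(images, max_dist=2):
    """Greedy clustering: join the first group within max_dist bits."""
    groups = []
    for name, grid in images:
        h = dhash(grid)
        for group in groups:
            if hamming(h, group["hash"]) <= max_dist:
                group["members"].append(name)
                break
        else:
            groups.append({"hash": h, "members": [name]})
    return [g["members"] for g in groups]

images = [
    ("sneaker_a", [[9, 1, 9, 1], [9, 1, 9, 1]]),  # original photo
    ("sneaker_b", [[9, 1, 1, 9], [9, 1, 9, 1]]),  # lightly edited copy
    ("handbag",   [[1, 9, 1, 9], [1, 9, 1, 9]]),  # different product
]
print(group_near_duplicates(images))  # sneakers cluster; handbag stays apart
```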
How they work together
Filtering, semantic detection, and image matching work in tandem: filtering removes noise, semantic detection uncovers hidden risks in text, and image matching captures visual duplicates at scale—ensuring broader, smarter, and more efficient discovery of infringing content.
AI-driven grouping of visually similar content
Image matching uses AI models to analyze and compare visual features (such as shape, colour patterns, textures, packaging details, or logos), and group together images that are likely to depict the same or highly similar products. This allows the system to cluster near-duplicates and lookalike items, even when sellers alter angles, lighting, or backgrounds to avoid detection.
How it works in the platform
Within Image View, the platform displays these AI-generated image groups. Instead of moderators reviewing hundreds of individual listings with the same product photo, they can review the entire group at once. When a moderator applies a label (e.g., “Counterfeit,” “Unauthorized Use,” “Parallel Import”), that label is applied to every listing in the group, drastically reducing repetitive work.
This strengthens auto-moderation by propagating infringing labels across all grouped images.
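The propagation step itself is simple: one moderator decision fans out to every listing that shares the grouped image. The sketch below assumes a group-to-listings mapping; the IDs and label text are illustrative, not the platform's actual schema.

```python
# Sketch of label propagation across an AI image group: one moderator
# decision is applied to every listing sharing the grouped image.
# Group IDs, listing IDs, and labels are illustrative.

def propagate_label(groups, listing_labels, group_id, label):
    """Apply a moderator's label to all listings in an image group."""
    for listing_id in groups[group_id]:
        listing_labels[listing_id] = label
    return listing_labels

groups = {"grp-1": ["post-101", "post-102", "post-103"]}
labels = {}
propagate_label(groups, labels, "grp-1", "Counterfeit")
print(labels)  # every grouped listing now carries the moderator's label
```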
Related articles: Automated Image Feature Detection
AI recognition of brand marks
Logo detection uses AI to identify logos within images, even if they are resized, cropped, or altered.
Moderator support
Logo detection is a key tool for moderators, surfacing content that contains brand marks so they can quickly focus on the most relevant images and listings.
Part of the wider AI system
Logo detection also strengthens other AI functions:
Filtering – removing irrelevant content while keeping logo-bearing items in scope.
Auto-moderation rules – applying or suggesting labels when logos appear on unauthorized products.
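How detection results feed those two functions can be sketched as a triage step: no logo means out of scope (filtering), while a logo on an unauthorised seller's listing earns a suggested label (auto-moderation). The detector here is a stub standing in for the real vision model, and the brand, sellers, and label text are invented for illustration.

```python
# Sketch of logo detection feeding filtering and auto-moderation.
# detect_logos is a stub for the real vision model; the brand name,
# seller IDs, and label text below are hypothetical.

def detect_logos(image_id):
    """Stub: the production system runs a vision model over the image."""
    fake_results = {"img-1": ["AcmeBrand"], "img-2": []}
    return fake_results.get(image_id, [])

def triage(image_id, seller, authorised_sellers):
    logos = detect_logos(image_id)
    if not logos:
        return "out-of-scope"               # filtering: no brand mark
    if seller not in authorised_sellers:
        return "suggest: Unauthorized Use"  # auto-moderation suggestion
    return "in-scope"

print(triage("img-1", "random-seller", {"official-store"}))
```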
Related articles: Automated Image Feature Detection
Automated labelling at scale
AI-powered auto-moderation rules can instantly label listings based on their features (such as product type, description patterns, or image attributes). For example, a listing with the keywords “replica,” “AAA quality,” or a seller watermark commonly associated with counterfeiters can be automatically tagged as “High Risk – Counterfeit.”
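The keyword portion of such a rule can be sketched as a pattern-to-label table. The patterns below come from the example above; the rule structure and label text are illustrative, not the platform's configuration format.

```python
# Sketch of an auto-moderation rule: keyword patterns mapped to a
# risk label. Patterns and labels are illustrative.
import re

RULES = [
    (re.compile(r"\breplica\b|\bAAA quality\b", re.IGNORECASE),
     "High Risk – Counterfeit"),
]

def auto_label(listing_text):
    """Return the first rule label that matches, or None."""
    for pattern, label in RULES:
        if pattern.search(listing_text):
            return label
    return None

print(auto_label("Brand-new AAA quality sneakers, ships fast"))
```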
Contextual AI insights in moderator workflows
AI doesn’t just assign static labels; it provides insights directly within Post View and Image View to help moderators quickly validate or refine the categorisation. For example:
In Post View, AI may highlight a suspicious phrase like “OEM version” or detect mismatched pricing (e.g., a luxury watch priced at $20), automatically suggesting a “Suspicious – Grey Market” label.
In Image View, AI may flag a product image that visually matches a known counterfeit pattern (such as identical packaging design but blurred logos) and recommend the “Counterfeit – Visual Match” label.
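The pricing-mismatch insight from the Post View example can be sketched as a simple anomaly check: a listing priced far below its category's typical floor earns a suggested label. The reference prices, ratio, and label text are invented for illustration.

```python
# Sketch of the pricing-mismatch insight: a listing priced far below
# the category's typical range gets a suggested label. The reference
# data, threshold, and label text are hypothetical.

TYPICAL_MIN_PRICE = {"luxury watch": 500.0}  # illustrative reference data

def price_insight(category, price, floor_ratio=0.1):
    """Suggest a label when price is implausibly low for the category."""
    floor = TYPICAL_MIN_PRICE.get(category)
    if floor is not None and price < floor * floor_ratio:
        return "Suspicious – Grey Market"
    return None

print(price_insight("luxury watch", 20.0))  # far below the typical floor
```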
Related articles: Auto-moderation Rules Overview, Auto-moderation Rules Configuration, Labels Overview, Labels Configuration
Auto-moderation rules can be powered by AI features such as:
Image matching
Logo detection
AI image feature detection
Related articles: Auto-moderation Rules Overview, Auto-moderation Rules Configuration
If a user tags another user in an item, the tagged user is notified via email.
The Notice View is a centralized hub for managing, tracking, and sending legal notices related to enforcement actions. Key features include:
View and manage all notices with unique IDs and clear enforcement stages
Review enforcement details and send notices directly to platforms
Log manual takedowns
See all related items and communication threads in one view
Pre-configured platform messages with flexibility to edit and attach documentation
Related articles: Enforcement Workflow, The Notice View
Zeal uses a range of unblocking and localisation techniques to maximise coverage and reduce detection. These include:
Geo-distributed proxies
User-agent rotation
Time-based scraping to better mimic natural platform behaviour
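Two of the techniques above, user-agent rotation and time-based pacing, can be sketched directly. The agent strings and delay bounds below are illustrative; a real crawler draws from much larger pools and tunes pacing per platform.

```python
# Sketch of user-agent rotation and human-like request pacing.
# The agent strings and delay bounds are illustrative only.
import itertools
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]
_ua_cycle = itertools.cycle(USER_AGENTS)

def next_request_headers():
    """Rotate through the user-agent pool, one agent per request."""
    return {"User-Agent": next(_ua_cycle)}

def human_like_delay(base=2.0, jitter=3.0):
    """Seconds to wait before the next request, mimicking browsing pace."""
    return base + random.uniform(0, jitter)
```

Geo-distributed proxies would slot in the same way: each request is paired with a proxy drawn from a regional pool alongside its rotated headers.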
To manage platform sensitivity and performance, we maintain a controlled baseline on higher-risk platforms (e.g., running the same core keyword daily) while simultaneously applying our broader discovery methods. These include:
Reverse image search
Generic scraping
AI-generated keyword expansion
Dynamic scheduling
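The scheduling idea, a stable daily baseline plus a rotating slice of broader discovery queries, can be sketched as below. The platform name, keywords, and rotation size are invented for illustration; only the baseline-plus-rotation structure reflects the approach described above.

```python
# Sketch of dynamic scheduling: a fixed daily baseline query per
# high-risk platform plus a rotating set of discovery queries.
# Platform names, keywords, and rotation size are hypothetical.

CORE_KEYWORD = {"marketplace-a": "acme handbag"}     # stable daily baseline
DISCOVERY_POOL = ["acme tote", "designer bag dupe", "luxury leather bag"]

def todays_queries(platform, day_index, extra_per_day=2):
    """Baseline query first, then a rotating slice of discovery queries."""
    queries = [CORE_KEYWORD[platform]]
    for i in range(extra_per_day):
        queries.append(DISCOVERY_POOL[(day_index + i) % len(DISCOVERY_POOL)])
    return queries

print(todays_queries("marketplace-a", day_index=0))
```

The baseline keeps behaviour on sensitive platforms predictable, while the rotating tail is where reverse image search, generic scraping, and AI-generated keyword expansion would feed in new queries.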
By introducing more intelligence into our scraping behaviour — rather than relying on static, repetitive queries — we increase both coverage and precision. This allows us to surface a significantly higher volume of relevant infringements across target platforms compared to previous approaches.