There are a few reasons:

Reason 1: You attempted to crawl a domain without entering the www prefix.

For example, you enter:

but the links on that page all point to URLs starting with http://www.

Our crawler is very STRICT in this sense and sees those as two different domains.
It has a simple rule to ONLY crawl within the exact domain you set to crawl.

So those links are skipped!

Solution: crawl the www version of your domain.
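The strict same-domain rule above can be sketched roughly like this (a minimal illustration in Python; the `same_host` helper and the example hostnames are ours, not the crawler's actual code):

```python
from urllib.parse import urlparse

def same_host(start_url: str, link: str) -> bool:
    # Strict comparison: "example.com" and "www.example.com"
    # count as two different domains, so a missing www prefix
    # (or any subdomain) fails the check and the link is skipped.
    return urlparse(start_url).hostname == urlparse(link).hostname

# Crawl started WITHOUT www -> www links are skipped:
same_host("http://example.com", "http://www.example.com/about")      # False
# Crawl started WITH www -> same-host links are crawled:
same_host("http://www.example.com", "http://www.example.com/about")  # True
# Subdomains also fail the strict check:
same_host("http://www.example.com", "http://blog.example.com/post")  # False
```

This is why entering the www version up front keeps all the internal links inside the crawl.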

Reason 2:
Our crawler only crawls URLs from the exact domain you set to crawl.
All other external links (including subdomains) will be ignored ❌
All dynamic JS URLs will be ignored (technical reason & solution here 🤓)

If you view the page source, you will see only these standard URLs:

Reason 3: You entered a URL with a /directory/ in it.
More here >
