I was just testing out the new /crawl endpoint. I made one successful crawl job, plus a few other attempts that failed with 400 errors, as expected.
I then started getting 426 errors, I guess because I exceeded the max of 6 requests per minute. The problem is that the `Retry-After` header is showing `17280`. Do I really need to wait almost 5 hours before trying again?
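For context, `Retry-After` can carry either a number of delta-seconds or an HTTP-date, so I sanity-checked the value with a small helper (the function name is my own, and I'm assuming the API is sending plain seconds, which would indeed be about 4.8 hours):

```python
import email.utils
import time

def parse_retry_after(value, now=None):
    """Interpret a Retry-After header value.

    Accepts either the delta-seconds form (e.g. "17280") or the
    HTTP-date form; returns the wait time in seconds, never negative.
    """
    now = time.time() if now is None else now
    try:
        # delta-seconds form, e.g. "17280"
        return max(0.0, float(value))
    except ValueError:
        # HTTP-date form, e.g. "Wed, 21 Oct 2026 07:28:00 GMT"
        dt = email.utils.parsedate_to_datetime(value)
        return max(0.0, dt.timestamp() - now)
```

Running it on the value I got back confirms `17280` seconds works out to roughly 4.8 hours, which seems way out of proportion for a 6-requests-per-minute limit.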
The requests I was making were completely legitimate, and I wasn't abusing the endpoint at all. Has anyone else run into this?