Autoscaling based on multi-CPU utilization for single-process crawlers? #1119
Labels:
solutioning: the issue is not being implemented but only analyzed and planned.
t-tooling: issues with this label are in the ownership of the tooling team.
Currently, the AutoscaledPool will try to scale up if the CPU utilization is low. The problem can occur when, for example, an HTTP-based crawler (essentially a single-process crawler) runs in an environment with multiple CPUs. The other CPUs will be underutilized, and this low overall utilization is reported to the AutoscaledPool, which may then try to scale up even though the one relevant core is already fully utilized.
This is probably not such a problem for browser-based crawlers, as the browsers run in their own processes and can use different cores.
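To illustrate the mismatch, here is a minimal sketch (using psutil as an assumed measuring tool, not Crawlee's actual monitoring code) that compares per-core and overall CPU utilization on a multi-core machine:

```python
import psutil

# Sample per-core and overall CPU utilization over a one-second window.
per_core = psutil.cpu_percent(interval=1.0, percpu=True)
overall = sum(per_core) / len(per_core)

print(f"per-core: {per_core}")
print(f"overall:  {overall:.1f}%")

# On an 8-core machine running a single-process HTTP crawler, per_core might
# look like [98.0, 3.0, 2.0, 4.0, 1.0, 2.0, 3.0, 2.0], giving an overall
# value of roughly 14% even though the crawler's core cannot go any faster.
```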
Mentioned here: apify/apify-sdk-python#447 (comment)
Maybe we need more detailed information about the utilization so that each crawler can decide what is relevant for it.
(Or possibly make Crawlee in general capable of scaling up to multiple CPUs?)
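As a purely hypothetical sketch of the "more detailed information" idea (the CpuSnapshot structure and the single_process flag are invented for illustration and are not existing Crawlee API), the utilization snapshot could expose both the overall average and the busiest core, and each crawler could pick the value that matters for it:

```python
from dataclasses import dataclass

import psutil


@dataclass
class CpuSnapshot:
    # Hypothetical richer snapshot: overall average plus the busiest core.
    overall_percent: float
    max_core_percent: float


def take_cpu_snapshot(interval: float = 1.0) -> CpuSnapshot:
    per_core = psutil.cpu_percent(interval=interval, percpu=True)
    return CpuSnapshot(
        overall_percent=sum(per_core) / len(per_core),
        max_core_percent=max(per_core),
    )


def is_cpu_overloaded(
    snapshot: CpuSnapshot, *, single_process: bool, threshold: float = 90.0
) -> bool:
    # A single-process (HTTP-based) crawler is bound by its busiest core,
    # while a browser-based crawler can spread work across cores, so the
    # overall average is the more meaningful signal for the latter.
    value = snapshot.max_core_percent if single_process else snapshot.overall_percent
    return value >= threshold
```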