Data handling in map_reduce api #1250
Replies: 1 comment 6 replies
-
Hi @nadeshseen
...
```python
futures = fexec.map(....)
for f in futures:
    print(f.stats)
```
In the API definition you can see all the available parameters of the map_reduce() call.
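For anyone unfamiliar with the futures pattern shown above: it follows the same shape as Python's standard `concurrent.futures`. Here is a runnable analogue of that pattern using only the standard library (no OpenWhisk, MinIO, or Lithops needed); the `mapper` function and the inputs are illustrative, not part of the Lithops API.

```python
# Runnable analogue of the map -> futures -> collect pattern, using the
# standard library. The executor plays the role Lithops' fexec plays.
from concurrent.futures import ThreadPoolExecutor

def mapper(x):
    # Illustrative map function: square the input.
    return x * x

with ThreadPoolExecutor() as executor:
    futures = [executor.submit(mapper, x) for x in range(4)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9]
```

In Lithops, `fexec.map()` similarly returns future objects whose results (and per-invocation `stats`) you can inspect after the functions complete.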
-
Hello, it would be really helpful if someone could clarify these queries or point me in the right direction, thank you.
Setup: OpenWhisk (compute backend) + MinIO (storage backend)
What I can see from the given examples is that all the data for a particular key is read into memory, but this would cause an error. For example, say the input data is 10 GB and my system can support only 10 mappers at a time, so I want each mapper to process 1 GB of data. However, each mapper runs in a container runtime with only 256 MB of RAM, so the job fails with:
MemoryError: ('HANDLER', 'Function exceeded maximum memory and was killed')
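The splitting being asked about here amounts to partitioning the object into fixed-size byte ranges so each mapper streams only its own range instead of loading the whole object. A minimal sketch of that range computation, assuming a hypothetical `byte_ranges` helper (this is not the Lithops implementation):

```python
# Sketch: split a large object into fixed-size byte ranges so that each
# mapper only ever touches its own (start, end) slice of the object.
def byte_ranges(total_size, chunk_size):
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size)
        ranges.append((start, end))
        start = end
    return ranges

GB = 1024 ** 3
chunks = byte_ranges(10 * GB, 1 * GB)
print(len(chunks))  # 10 one-gigabyte ranges
```

Each mapper would then issue a ranged read against the storage backend for its slice and process it as a stream, so peak memory stays well below the 1 GB chunk size even in a 256 MB container.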
I looked at obj_reduce_by_key, but it won't be useful because it just works on separate input splits and gives a separate result for each input split.
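The distinction being drawn above can be sketched in plain Python: reducing each split separately yields one partial result per split, whereas what is wanted is a second, global reduce that combines those partials. This is illustrative only, not how Lithops implements it:

```python
# Per-split reduce vs. a global reduce over all splits (illustrative).
from functools import reduce

splits = [[1, 2, 3], [4, 5], [6]]

# Reducing each split on its own: one result per split.
per_split = [reduce(lambda a, b: a + b, s) for s in splits]

# A global reduce combines the partial results into a single answer.
global_result = reduce(lambda a, b: a + b, per_split)

print(per_split, global_result)  # [6, 9, 6] 21
```

Getting a single answer across all splits therefore needs the extra combining stage; per-split reduction alone stops at the partial results.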