I read some comments about multithreaded inference, and generally it's not good news.
How can I use the full power of the edge machine's CPUs to submit multiple inputs for inference?
Each `get_feature` (inference) call takes almost 1 second on the Raspberry Pi, so I have to wait about 5 seconds to get results for the 5th input image, while CPU usage is only 15-20%.
As you know, we are limited to 1 GB of RAM. What is the best approach to run multiple inferences at the same time? (Python)
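To illustrate what I'm after, here is a rough sketch using a `multiprocessing.Pool` so each image is handled by a separate worker process. `get_feature` below is just a trivial stand-in for my real ~1 s inference call, and this assumes each worker can afford its own copy of the model within the 1 GB RAM, which I'm not sure about:

```python
from multiprocessing import Pool

def get_feature(image):
    # Placeholder for the real inference call; here just a trivial transform.
    return sum(image)

if __name__ == "__main__":
    # Five dummy "images"; in practice these would be frames to run through the model.
    images = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
    # Each worker process runs get_feature independently, so the five
    # inferences can overlap instead of running back to back.
    with Pool(processes=4) as pool:
        features = pool.map(get_feature, images)
    print(features)
```

Is something like this the right direction, or is a thread pool (or batching inside the interpreter itself) better given the memory limit?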