Is it possible to run a YOLO model on video? Is there an example you could share?
Sure, that’s definitely possible. You’d need to loop through the frames of the video and apply the pre-trained YOLO model to each frame, then stitch the frames (with bounding boxes drawn on) back into a video at the same frame rate. Other models, such as Flow-Guided Feature Aggregation, give more temporally consistent results (i.e. less jitter in predictions between frames), but frame-by-frame prediction is a good place to start.
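A minimal sketch of that frame-by-frame loop, just to show the structure. The `detect` function here is a hypothetical stand-in (a real pipeline would call a pretrained GluonCV detector such as `yolo3_darknet53_coco` and use its returned class IDs, scores, and boxes), the box drawing is a naive 1-pixel rectangle rather than GluonCV's plotting utilities, and the "video" is just a list of NumPy arrays. With an actual file you would read frames via OpenCV's `cv2.VideoCapture` and write annotated frames with `cv2.VideoWriter` at the source FPS:

```python
import numpy as np

def detect(frame):
    """Hypothetical stand-in for a pretrained YOLO detector.

    A real implementation would run a model (e.g. GluonCV's
    yolo3_darknet53_coco) and return its predictions. Here we return
    one fixed (class_id, score, bbox) tuple so the loop runs end to end.
    """
    h, w = frame.shape[:2]
    return [(0, 0.9, (w // 4, h // 4, 3 * w // 4, 3 * h // 4))]

def draw_boxes(frame, detections, score_thresh=0.5):
    """Burn bounding boxes into a copy of the frame (naive 1-px rectangle)."""
    out = frame.copy()
    for _cls_id, score, (x0, y0, x1, y1) in detections:
        if score < score_thresh:
            continue  # skip low-confidence predictions
        out[y0, x0:x1] = 255      # top edge
        out[y1 - 1, x0:x1] = 255  # bottom edge
        out[y0:y1, x0] = 255      # left edge
        out[y0:y1, x1 - 1] = 255  # right edge
    return out

def annotate_video(frames):
    """Run the detector on every frame and yield annotated frames.

    With a real video you would pull frames from cv2.VideoCapture and
    push results into cv2.VideoWriter at the original frame rate.
    """
    for frame in frames:
        yield draw_boxes(frame, detect(frame))

# Synthetic 3-frame "video" of 64x64 grayscale images.
video = [np.zeros((64, 64), dtype=np.uint8) for _ in range(3)]
annotated = list(annotate_video(video))
print(len(annotated))  # → 3: one annotated frame per input frame
```

Swapping the stub `detect` for a real model is the only change needed; the loop and stitching stay the same.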
Thanks for the link to the script! Yes, that’s what I was looking for, and I was thinking/hoping it would officially become part of the GluonCV library.