I’m brand new to event-driven architecture. I’ve done my best to get the lay of the land, learning about things like producers, consumers, backpressure, synchronization, reactive streams, etc. I’ve also come across a few (seemingly) relevant Python frameworks: quix-streams, dask, faust, and RxPY. However, my use case has additional complications that my reading so far hasn’t addressed.
Context
I’m working on a real-time computer vision pipeline. Frames come from one (or more) cameras and are used to reconstruct a scene in real time. Later in the pipeline, the reconstructed scene is monitored, and events are fired that trigger real-time alerts to the user.
The scene reconstruction pipeline consists of many tasks (e.g. object detection models, point cloud construction) with various dependencies between them. Eventually, this leads to a final task that takes input from a few prior tasks and produces a single output: the reconstructed scene. Most of my complications arise in this pipeline.
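To make the structure concrete, here’s a rough sketch of how I’ve been picturing the task graph. The task names and the dict layout are placeholders I made up, not from any of the frameworks above:

```python
# Hypothetical task graph: each task lists what it consumes. "frame" is
# the raw camera input; a task listing itself depends on its own output
# from the previous frame.
PIPELINE = {
    "object_detection": {"inputs": ["frame"]},
    "point_cloud": {"inputs": ["frame"]},
    "tracking": {"inputs": ["object_detection", "tracking"]},
    "scene_reconstruction": {"inputs": ["object_detection", "point_cloud", "tracking"]},
}

def run_order(graph):
    """Resolve a per-frame execution order from the declared dependencies."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in graph[name]["inputs"]:
            # "frame" is external input; a self-dependency means "my own
            # result from the previous frame", which already exists.
            if dep != "frame" and dep != name:
                visit(dep)
        order.append(name)

    for name in graph:
        visit(name)
    return order

print(run_order(PIPELINE))
# -> ['object_detection', 'point_cloud', 'tracking', 'scene_reconstruction']
```

Tasks with no path between them in this graph (object_detection and point_cloud here) are the ones I’d like to run concurrently.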
- When a frame arrives, multiple tasks can start processing it. If each of these tasks is treated as a separate consumer, it isn’t feasible to manage backpressure for each one independently. The frame-dropping logic needs to be consistent across all of them, since these consumers all eventually feed into that single scene reconstruction output (the sketch after this list is roughly what I have in mind for this and the next two points).
- Some tasks in the pipeline will depend on the result from processing the previous frame.
- For tasks that don’t depend on the result from a previous frame (e.g. object detection model inference), it should be possible for multiple frames to be processed at once to increase throughput.
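Here’s a minimal asyncio sketch of how I’m currently imagining all three points fitting together: one shared gate where frames are admitted or dropped, several workers so stateless stages can have multiple frames in flight, and a single stateful stage that carries the previous scene forward. Everything here (FrameGate, detect, build_cloud, reconstruct) is a name I invented for illustration, not an API from any of the frameworks above:

```python
import asyncio
import itertools

class FrameGate:
    """Single admission point: a frame is either admitted for the whole
    pipeline or dropped for the whole pipeline, so every downstream task
    agrees on which frames exist."""

    def __init__(self, maxsize: int = 2):
        self._queue = asyncio.Queue(maxsize=maxsize)
        self._seq = itertools.count()

    def offer(self, frame):
        # Call from the event loop (e.g. via loop.call_soon_threadsafe
        # from a camera thread); asyncio.Queue is not thread-safe.
        if self._queue.full():
            self._queue.get_nowait()  # drop-oldest, decided once for everyone
        self._queue.put_nowait((next(self._seq), frame))

    async def take(self):
        return await self._queue.get()

async def stateless_worker(gate, results, detect, build_cloud):
    # Run several of these concurrently so multiple frames can be in
    # flight through the stateless stages at once.
    while True:
        seq, frame = await gate.take()
        detections, cloud = await asyncio.gather(detect(frame), build_cloud(frame))
        await results.put((seq, detections, cloud))

async def reconstructor(results, reconstruct):
    # Stateful final stage: a single consumer that carries the previous
    # scene forward. Results older than one already reconstructed are
    # discarded so the scene never moves backward in time.
    last_seq, scene = -1, None
    while True:
        seq, detections, cloud = await results.get()
        if seq < last_seq:
            continue  # stale: a newer frame already reached reconstruction
        scene = await reconstruct(detections, cloud, scene)
        last_seq = seq

async def start_pipeline(detect, build_cloud, reconstruct, n_workers: int = 3):
    gate, results = FrameGate(maxsize=2), asyncio.Queue()
    tasks = [
        asyncio.create_task(stateless_worker(gate, results, detect, build_cloud))
        for _ in range(n_workers)
    ]
    tasks.append(asyncio.create_task(reconstructor(results, reconstruct)))
    return gate, tasks  # feed frames in via gate.offer(frame)
```

The part I’m least sure about is the reconstructor’s ordering policy: skipping stale results keeps latency bounded, but it means the “previous frame” dependency is really a “previous successfully processed frame” dependency.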
Questions
I know this is a lot. Thank you for reading this far! I have a few questions. Answers to any would be much appreciated!
- Are there any resources you’d recommend that are particularly relevant to my use case? Are there other subreddits where this question might be worth posing?
- Is there something fundamentally wrong with my approach here? I wasn’t able to find much about event-driven architecture as it relates to real-time computer vision systems.
- Has anyone spent time using the frameworks I mentioned? Does one in particular stand out as a good candidate for my use case? Are there others I should look into?