Using a combination of video cameras and event-based cameras to extract meaningful motion information on complex animal behavior
Complex animal behavior is often analyzed from video recordings because cameras provide an economical and non-invasive way to acquire abundant data. Hence, developing computer vision tools to extract relevant information from such a rich yet raw data source is essential to support behavioral analysis. We propose to develop computer vision algorithms that combine video (i.e., frame-based) cameras and event-based cameras to extract meaningful motion information about individuals, in isolation or as part of groups. The two sensor types are complementary: event-based cameras excel at capturing high-frequency temporal content, while traditional cameras are better at acquiring slowly-varying content.
Event-based cameras are novel, biologically-inspired sensors that mimic the transient pathway of the human visual system. Each pixel responds asynchronously to local brightness changes, producing so-called "events" with precise timestamps. These cameras can capture the dynamics of a scene with high dynamic range and high temporal resolution, without suffering from motion blur, unlike traditional cameras. Additionally, they record only motion information, which we will exploit for long-term tracking and better segmentation of behaviors.
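To make the sensing principle concrete, here is a minimal sketch of the standard event-generation model: a pixel emits an event whenever its log-intensity changes by more than a contrast threshold. Real sensors do this asynchronously per pixel; this simplified illustration (function name, threshold value, and frame-pair approximation are our own assumptions, not a specific sensor's API) compares two consecutive frames instead.

```python
import numpy as np

def frames_to_events(prev_frame, curr_frame, t_curr, threshold=0.2):
    """Simplified event-generation model (illustrative, not a sensor API):
    a pixel emits an event when its log-intensity changes by more than
    `threshold`. Events are tuples (x, y, timestamp, polarity)."""
    eps = 1e-6  # avoid log(0) on dark pixels
    log_prev = np.log(prev_frame.astype(np.float64) + eps)
    log_curr = np.log(curr_frame.astype(np.float64) + eps)
    diff = log_curr - log_prev
    # Pixels whose log-brightness change exceeds the contrast threshold
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarities = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    # Timestamp approximated by the current frame time; a real event camera
    # would report per-pixel, microsecond-resolution timestamps.
    return [(int(x), int(y), t_curr, int(p)) for x, y, p in zip(xs, ys, polarities)]

# Example: only the two pixels that changed generate events.
prev = np.array([[1.0, 1.0], [1.0, 1.0]])
curr = np.array([[2.0, 1.0], [1.0, 0.5]])
events = frames_to_events(prev, curr, t_curr=0.01)
print(events)  # [(0, 0, 0.01, 1), (1, 1, 0.01, -1)]
```

Static pixels produce no output, which is why event streams are sparse and naturally motion-selective.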
We consider an active vision approach, in which the viewpoint of the camera can vary to improve tracking performance. This system will enable robust detection of individuals regardless of their 3D location and avoid target disappearance during long-term tracking. In an analysis phase, the motion tracks will help categorize relevant behaviors in the interactions between individuals.
Related Publications
Shiba, S., Klose, Y., Aoki, Y., & Gallego, G. (2024). Secrets of Event-based Optical Flow, Depth and Ego-motion Estimation by Contrast Maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–18. https://doi.org/10.1109/TPAMI.2024.3396116
Shiba, S., Hamann, F., Aoki, Y., & Gallego, G. (2023). Event-based Background-Oriented Schlieren. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–16. https://doi.org/10.1109/TPAMI.2023.3328188
Hamann, F., Ghosh, S., Martínez, I. J., Hart, T., Kacelnik, A., & Gallego, G. (2024). Low-power, Continuous Remote Behavioral Localization with Event Cameras. CVPR. https://doi.org/10.48550/arXiv.2312.03799
Hamann, F., & Gallego, G. (2022). Stereo Co-capture System for Recording and Tracking Fish with Frame- and Event Cameras. International Conference on Pattern Recognition (ICPR), Workshop on Visual observation and analysis of Vertebrate And Insect Behavior. https://arxiv.org/abs/2207.07332