Draft - Distilled Recurrent All-Pairs Field Transforms For Optical Flow

  • Abstract
  • Literature Map
  • Similar Papers
Abstract


Similar Papers
  • Research Article
  • Citations: 178
  • 10.1016/j.cub.2009.07.057
Optic Flow Processing for the Assessment of Object Movement during Ego Movement
  • Aug 20, 2009
  • Current Biology
  • Paul A Warren + 1 more


  • Research Article
  • Citations: 8
  • 10.1002/rob.22065
Characteristics of optical flow from aerial thermal imaging, “thermal flow”
  • Feb 6, 2022
  • Journal of Field Robotics
  • Tran Xuan Bach Nguyen + 5 more

This study explores the utility of optical flow calculated from thermal imaging cameras, “thermal flow,” mounted on an aircraft for localization in day and night conditions. Our sensor implementation uses a long-wave infrared (LWIR) micro sensor to capture sequences of thermal images and an on-board computer to compute an optical flow estimate. We compared the performance of optical flow from the LWIR camera with the output of a visible-spectrum optical flow sensor. Flights were conducted spanning a 24 h window to explore how thermal flow performs relative to optical flow as the ground heats and cools. Agreement between optical and thermal flow was found during daylight, when both sensors were functional. Additionally, thermal flow results were reliable from the middle of the day through to late evening, gradually degrading until shortly after sunrise.

  • Conference Article
  • Citations: 2
  • 10.1117/12.186007
Kalman filter for improving optical flow accuracy along moving boundaries
  • Sep 16, 1994
  • Proceedings of SPIE
  • J N Pan + 1 more

Optical flow is an important source of information about the motion and structure of objects in the 3D world. Once the optical flow field is computed accurately, the measured image velocities can be used widely in many computer vision tasks. Current computer vision techniques require that the relative errors in the optical flow be less than 10%; however, reducing error in optical flow determination remains a difficult problem. In this paper, we propose a Kalman filter for improving accuracy in determining optical flow along moving boundaries. First, a quantitative analysis of the error-decreasing rate in iteratively determining optical flow using the correlation-based technique is given. It concludes that this rate varies across the image plane: it is larger in regions where intensity varies drastically and smaller where intensity varies smoothly. This indicates that the number of iterations used in optical flow determination should not be uniform across image regions. In particular, along moving boundaries, where intensity usually changes sharply, fewer iterations are needed than in other regions. This is reasonable: the confidence measure is usually high along moving boundaries, since richer information exists there. An optical flow algorithm should therefore use fewer iterations along moving boundaries than elsewhere, so that the better boundary estimates can be propagated into other areas instead of being blurred by them. Second, we propose a Kalman filter that applies the appropriate number of iterations per region, deblurring boundaries and enhancing accuracy. Loosely speaking, whenever the standard deviation of the optical flow at a pixel falls below a certain criterion, i.e., good accuracy has been achieved, the Kalman filter stops updating the optical flow at that pixel, thus preserving accuracy along moving boundaries. Assuming the estimated optical flow field is contaminated by Gaussian white noise, we give appropriate consideration to the system and measurement noise covariance matrices, Q and R, respectively. In this way, the Kalman filter is used to eliminate noise, raise accuracy, and refine estimates along discontinuities. Finally, an experiment demonstrates the efficiency of our Kalman filter. Two objects are considered: one stationary, the other in translation. Unified optical flow field (UOFF) quantities are determined using the proposed technique, and 3D positions and speeds are then estimated with the UOFF approach. Results obtained both with and without the Kalman filter are given; a more than 10% improvement is achieved in this experiment. It is expected that the more moving boundaries in the scene, the more effectively the scheme works.
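The per-pixel gating idea in this abstract — stop refining a pixel once its standard deviation is small enough — can be sketched as a scalar Kalman update with a freeze mask. This is a minimal sketch: the function and parameter names, and the threshold value, are illustrative rather than taken from the paper.

```python
import numpy as np

def gated_kalman_update(flow, flow_var, meas, meas_var, std_thresh=0.05):
    """One scalar Kalman update per pixel, skipped where the current
    standard deviation is already below std_thresh (the gating step that
    freezes well-estimated pixels, e.g. along moving boundaries)."""
    frozen = np.sqrt(flow_var) < std_thresh     # pixels considered converged
    gain = flow_var / (flow_var + meas_var)     # scalar Kalman gain
    new_flow = flow + gain * (meas - flow)      # state update
    new_var = (1.0 - gain) * flow_var           # covariance update
    # keep frozen pixels unchanged
    new_flow = np.where(frozen, flow, new_flow)
    new_var = np.where(frozen, flow_var, new_var)
    return new_flow, new_var
```

A pixel with low variance keeps its estimate, while an uncertain pixel is pulled toward the new measurement and its variance shrinks.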

  • Research Article
  • Citations: 13
  • 10.1109/taffc.2022.3197622
Self-Supervised Approach for Facial Movement Based Optical Flow
  • Oct 1, 2022
  • IEEE Transactions on Affective Computing
  • Muhannad Alkaddour + 2 more

Computing optical flow is a fundamental problem in computer vision. However, deep learning-based optical flow techniques do not perform well for non-rigid movements such as those found in faces, primarily due to a lack of training data representing fine facial motion. We hypothesize that learning optical flow on face motion data will improve the quality of predicted flow on faces. This work aims to: (1) explore self-supervised techniques for generating optical flow ground truth for face images; (2) compute baseline results on the effects of using face data to train convolutional neural networks (CNNs) for predicting optical flow; and (3) use the learned optical flow in micro-expression recognition to demonstrate its effectiveness. We generate optical flow ground truth using facial key-points in the BP4D-Spontaneous dataset. This optical flow is used to train the FlowNetS architecture, whose performance is tested on the Extended Cohn-Kanade dataset and a portion of the generated dataset. FlowNetS trained on face images surpassed other optical flow CNN architectures. Our optical flow features are further compared with other methods using the STSTNet micro-expression classifier, and the results indicate that the optical flow obtained in this work has promising applications in facial expression analysis.
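One simple way to densify sparse key-point displacements into a flow field, as the ground-truth generation step above requires, is inverse-distance weighting. This is a hedged stand-in: the paper's actual interpolation scheme is not specified in the abstract, and all names here are illustrative.

```python
import numpy as np

def keypoints_to_dense_flow(pts, disps, h, w, eps=1e-6, power=2.0):
    """Densify sparse key-point displacements into an (h, w, 2) flow field
    by inverse-distance weighting. pts is (N, 2) in (y, x) order; disps is
    (N, 2) per-key-point displacements."""
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)  # (HW, 2)
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)         # (HW, N)
    wgt = 1.0 / (d2 ** (power / 2.0) + eps)      # inverse-distance weights
    flow = (wgt @ disps) / wgt.sum(axis=1, keepdims=True)
    return flow.reshape(h, w, 2)
```

With identical displacements at every key-point, the interpolated field is that displacement everywhere, regardless of the weighting.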

  • Research Article
  • Citations: 79
  • 10.1007/s00371-018-1477-y
Learning deep facial expression features from image and optical flow sequences using 3D CNN
  • May 4, 2018
  • The Visual Computer
  • Jianfeng Zhao + 2 more

Facial expression is highly correlated with facial motion. According to whether the temporal information of facial motion is used, facial expression features can be classified as static or dynamic. The former, which mainly include geometric and appearance features, can be extracted by convolution or other learned filters; the latter, which aim to model the dynamic properties of facial motion, can be calculated through optical flow or other methods. When 3D convolutional neural networks (CNNs) are introduced, extracting both types of features becomes easy. In this paper, a 3D CNN architecture is presented to learn static and dynamic features from facial image sequences and to extract high-level dynamic features from optical flow sequences. Two types of dense optical flow, which contain the tracking information of facial muscle movement, are calculated according to different image-pair construction methods: the common optical flow, and an enhanced variant called accumulative optical flow. Four components of each type of optical flow are used in the experiments. Three databases are selected to conduct the experiments: two acted databases and one nearly realistic database. The experiments on the two acted databases achieve state-of-the-art accuracy and indicate that the vertical component of optical flow has an advantage over the other components in recognizing facial expression. The results on the three selected databases show that more discriminative features can be learned from image sequences than from optical flow or accumulative optical flow sequences, and that accumulative optical flow contains more motion information than optical flow if the frame distance of the image pairs used to calculate them is not too large.
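The "accumulative optical flow" idea — displacement aggregated over more than one frame step — can be illustrated by composing two dense flow fields: the total displacement is the first flow plus the second flow sampled at the displaced position. This is a sketch with nearest-neighbour sampling; the paper's exact construction may differ.

```python
import numpy as np

def compose_flows(f12, f23):
    """Accumulate two dense flow fields: the displacement from frame 1 to
    frame 3 is f12(x) plus f23 sampled at x + f12(x) (nearest-neighbour
    lookup here for simplicity). f12, f23 have shape (H, W, 2) in
    (dy, dx) order."""
    h, w = f12.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # target positions in frame 2, rounded and clipped to the grid
    ty = np.clip(np.rint(ys + f12[..., 0]).astype(int), 0, h - 1)
    tx = np.clip(np.rint(xs + f12[..., 1]).astype(int), 0, w - 1)
    return f12 + f23[ty, tx]
```

For two constant flows, composition simply adds the displacements.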

  • Research Article
  • Citations: 105
  • 10.1109/tip.2008.925381
Occlusion-Aware Optical Flow Estimation
  • Aug 1, 2008
  • IEEE Transactions on Image Processing
  • S Ince + 1 more

Optical flow can be reliably estimated between areas visible in two images, but not in occlusion areas. If optical flow is needed in the whole image domain, one approach is to use additional views of the same scene. If such views are unavailable, an often-used alternative is to extrapolate optical flow in occlusion areas. Since the location of such areas is usually unknown prior to optical flow estimation, this is usually performed in three steps. First, occlusion-ignorant optical flow is estimated, then occlusion areas are identified using the estimated (unreliable) optical flow, and, finally, the optical flow is corrected using the computed occlusion areas. This approach, however, does not permit interaction between optical flow and occlusion estimates. In this paper, we permit such interaction by proposing a variational formulation that jointly computes optical flow, implicitly detects occlusions and extrapolates optical flow in occlusion areas. The extrapolation mechanism is based on anisotropic diffusion and uses the underlying image gradient to preserve structure, such as optical flow discontinuities. Our results show significant improvements in the computed optical flow fields over other approaches, both qualitatively and quantitatively.
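The extrapolation mechanism described above — filling flow in occluded regions while respecting image structure — can be sketched as iterative, edge-weighted averaging restricted to occluded pixels. This is a crude stand-in for the paper's anisotropic diffusion; the weighting function and parameters are illustrative.

```python
import numpy as np

def diffuse_flow_into_occlusions(flow, occluded, image, n_iter=200, k=0.1):
    """Fill flow at occluded pixels by repeatedly taking a weighted mean of
    the 4 neighbours, with weights that decay with image-intensity
    difference so flow is not diffused across strong image edges.
    flow: (H, W, 2); occluded: (H, W) bool; image: (H, W)."""
    f = flow.copy()
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_iter):
        num = np.zeros_like(f)
        den = np.zeros(f.shape[:2])
        for dy, dx in shifts:
            nb_f = np.roll(f, (dy, dx), axis=(0, 1))
            nb_i = np.roll(image, (dy, dx), axis=(0, 1))
            w = np.exp(-((image - nb_i) ** 2) / k)   # edge-stopping weight
            num += w[..., None] * nb_f
            den += w
        avg = num / den[..., None]
        f = np.where(occluded[..., None], avg, f)    # update occluded px only
    return f
```

Visible pixels act as boundary conditions, so a hole surrounded by constant flow converges to that constant.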

  • Conference Article
  • 10.1117/12.444085
Optical flow measurement on Boolean edge detection and Hough transform
  • Oct 4, 2001
  • Proceedings of SPIE
  • Muhammad B Ahmad + 4 more

Motion estimation is one of the fundamental problems in digital video processing. One of the most notable approaches to motion estimation is based on estimating a measure of the change of image brightness in the frame sequence, commonly referred to as optical flow. The classical approaches for finding optical flow have many drawbacks. The numerical or least-squares methods for solving the optical flow constraints are susceptible to errors in cases of occlusion and noise. Two moving objects sharing a common border produce conflicting velocities, and taking their averages yields a less satisfactory optical flow estimate. The wrong detection of moving boundaries, as motion is usually not homogeneous, and inexact contour measurements of moving objects are further problems of optical flow methods. Therefore, information such as color and edges has been used in the literature alongside optical flow. Further, the classical methods require many calculations for optical flow measurement. In this paper, we propose a method that is very fast and gives better motion information for the objects in image sequences. The possible locations of moving objects are found first, and the Hough transform is then applied only on the detected moving regions to find the optical flow vectors for those regions, saving considerable time by not computing optical flow for the still or background parts of the image sequence. The new Boolean-based edge detection is applied to two consecutive input images, and the differential edge image of the resulting two edge maps is found. A mask for detecting the moving regions is made by dilating the differential edge image. After obtaining the moving regions in the image sequence with the help of this mask, we use the Hough transform and voting accumulation methods to solve the optical flow constraint equations. The voting-based Hough transform avoids the errors associated with least-squares techniques, and the calculation of a large number of points along the constraint line is avoided by using the transformed slope-intercept parameter domain. The simulation results show that the proposed method is very effective for extracting optical flow vectors and hence for tracking moving objects in images.
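The voting scheme described above can be sketched directly: each pixel's brightness-constancy constraint Ix·u + Iy·v + It = 0 defines a line in (u, v) space, and votes are accumulated on a discretised grid. A minimal sketch: bin counts and the vote tolerance are illustrative, and the paper's slope-intercept parameterisation is replaced here by a direct (u, v) accumulator.

```python
import numpy as np

def hough_flow(Ix, Iy, It, u_range=(-3, 3), v_range=(-3, 3), bins=61, tol=0.05):
    """Vote in a discretised (u, v) accumulator: each pixel's constraint
    Ix*u + Iy*v + It = 0 adds a vote to every (u, v) cell lying within tol
    of its constraint line. Returns the peak velocity."""
    us = np.linspace(*u_range, bins)
    vs = np.linspace(*v_range, bins)
    U, V = np.meshgrid(us, vs, indexing="ij")
    acc = np.zeros((bins, bins))
    for ix, iy, it in zip(Ix.ravel(), Iy.ravel(), It.ravel()):
        acc += np.abs(ix * U + iy * V + it) < tol   # votes near the line
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return us[i], vs[j]
```

Constraints from horizontal and vertical gradients each constrain one component; their intersection cell collects the most votes.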

  • Research Article
  • Citations: 12
  • 10.3390/s20226567
Optical and Mass Flow Sensors for Aiding Vehicle Navigation in GNSS Denied Environment
  • Nov 17, 2020
  • Sensors (Basel, Switzerland)
  • Mohamed Moussa + 5 more

Autonomous vehicles have attracted a lot of research interest regarding navigation, perception of the surrounding environment, and control. Global Navigation Satellite System/Inertial Navigation System (GNSS/INS) integration is one of the significant components of any vehicle navigation system. However, GNSS has limitations in some operating scenarios, such as urban regions and indoor environments, where the GNSS signal suffers from multipath or outage. On the other hand, a standalone INS navigation solution degrades over time due to INS errors. A modern vehicle navigation system therefore depends on integrating different sensors to aid the INS and mitigate its drift during GNSS signal outages. However, aiding sensors face challenges related to high price, high computational cost, and environmental and weather effects. This paper proposes an integrated aiding navigation system for vehicles in indoor environments (e.g., underground parking). The proposed system integrates optical flow and multiple mass flow sensors to aid a low-cost INS, providing the navigation extended Kalman filter (EKF) with forward-velocity and change-of-heading updates to enhance vehicle navigation. The optical flow is computed from frames taken by a consumer portable device (CPD) camera mounted in the upward-looking direction, to avoid moving objects in front of the camera and to exploit typical features of underground parking or tunnels such as ducts and pipes. The multiple mass flow sensor measurements are modeled to provide forward velocity information; moreover, a mass flow differential odometry is proposed in which the vehicle's change of heading is estimated from the multiple mass flow sensor measurements. This integrated aiding system can be used for unmanned aerial vehicle (UAV) and land vehicle navigation. The experimental results are implemented for land vehicles through the integration of the CPD with mass flow sensors to aid the navigation system.
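The two aiding updates the abstract describes — a forward velocity and a change of heading — can be illustrated with a minimal planar dead-reckoning step. This is a stand-in for the paper's full EKF/INS fusion; the function and parameter names are illustrative.

```python
import numpy as np

def propagate_pose(pose, v_fwd, dheading, dt=1.0):
    """Dead-reckon a planar pose (x, y, heading) from a forward-velocity
    measurement and a change-of-heading measurement for one time step
    (a stand-in for feeding these updates into a navigation EKF)."""
    x, y, th = pose
    th = th + dheading            # apply the heading-change update
    x += v_fwd * dt * np.cos(th)  # advance along the new heading
    y += v_fwd * dt * np.sin(th)
    return (x, y, th)
```

Driving one unit straight and then one unit after a 90-degree left turn lands the vehicle at (1, 1).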

  • Supplementary Content
  • Citations: 1
  • 10.26083/tuprints-00019455
Probabilistic Optical Flow and its Image-Adaptive Refinement
  • Jan 1, 2021
  • TUprints (Technical University of Darmstadt)
  • Anne S Wannenwetsch

Optical flow estimation, i.e. the prediction of motion in an image sequence, is an essential problem in low-level computer vision. Optical flow serves particularly as an input for many other tasks such as navigation, object tracking, or image registration. In the estimation of flow fields, certain image regions are particularly challenging due to task-inherent difficulties such as illumination changes and occlusions as well as common prediction mistakes, e.g. for large displacements or near motion boundaries. Therefore, the reliability of optical flow estimates varies heavily across the image domain. The first part of this thesis thus focuses on probabilistic optical flow methods, which predict a posterior distribution over the flow field conditioned on the input images. The first proposed method obtains probabilistic estimates by using variational inference to approximate a posterior derived from energy-based optical flow formulations. With ProbFlow, a fully probabilistic optical flow approach shows for the first time competitive results on popular benchmark datasets. The model-inherent confidence measure performs superior in comparison to previous work and the uncertainties are beneficially applied to improve optical flow estimates and a subsequent motion segmentation. In a follow-up work, SVIGL is developed to combine stochastic approaches for variational inference with gradient linearization - a frequently used procedure in optical flow energy methods due to its good optimization properties. SVIGL shows faster convergence and higher robustness than standard approaches for stochastic variational inference of complex posteriors. Moreover, it provides probabilistic optical flow without the tedious derivation of update equations required in ProbFlow while maintaining comparable performance. Although confidence measures detect unreliable regions, they do not directly improve the estimated flow fields. 
The second part of this thesis thus targets the refinement of optical flow in the context of neural networks. Here, the input images guide the post-processing, as they provide valuable information about the structure of correct predictions. The first approach builds on an existing method for image-adaptive convolutions in a high-dimensional space. This space is spanned by feature dimensions that are now learned from data to improve the concept of pixel similarity used in the filtering operation. When the so-called semantic lattice is applied to replace the bilinear upsampling step of state-of-the-art deep networks, one sees a clear improvement of the predictions, in particular at motion boundaries. In the last contribution, the two goals of this thesis are combined, and per-pixel confidence estimates are leveraged for the image-adaptive refinement of deep optical flow predictions. As such, the proposed probabilistic pixel-adaptive convolutions (PPACs) weight pixels in a neighborhood not only according to learned similarity characteristics but also according to their individual reliability. The proposed PPAC refinement networks lead to substantial improvements over the underlying optical flow estimates. The obtained results are state-of-the-art on several benchmarks and show smooth flow fields with crisp boundaries as well as improved results in unreliable regions.
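The confidence-weighted, image-adaptive filtering behind PPACs can be sketched in one dimension: neighbours vote with weights that combine guide-image similarity and per-sample confidence. A toy stand-in, not the thesis's learned formulation; window size and sigma are illustrative.

```python
import numpy as np

def confidence_weighted_smooth(flow, guide, conf, sigma=0.5):
    """Toy 1-D image-adaptive filter: each output sample averages its
    5-sample neighbourhood with weights combining guide-image similarity
    (Gaussian in intensity difference) and per-sample confidence, so
    unreliable outliers get little say in the result."""
    n = len(flow)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - 2), min(n, i + 3)
        w = np.exp(-((guide[lo:hi] - guide[i]) ** 2) / (2 * sigma ** 2))
        w = w * conf[lo:hi]                    # down-weight unreliable samples
        out[i] = np.sum(w * flow[lo:hi]) / np.sum(w)
    return out
```

A low-confidence outlier is pulled toward its reliable neighbours instead of contaminating them.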

  • Research Article
  • Citations: 11
  • 10.1109/tpami.2021.3130302
Optical Flow in the Dark
  • Dec 1, 2022
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Mingfang Zhang + 2 more

Optical flow estimation in low-light conditions is a challenging task for existing methods, and current optical flow datasets lack low-light samples. Even if the dark images are enhanced before estimation, which can yield good visual quality, the result is still suboptimal optical flow, because information such as motion consistency may be broken during the enhancement. We propose a novel training policy to learn optical flow directly from new synthetic and real low-light images. Specifically, we first design a method to collect a new optical flow dataset in multiple exposures with shared optical flow pseudo labels. We then apply a two-step process to create a synthetic low-light optical flow dataset, based on an existing bright one, by simulating low-light raw features from the multi-exposure raw images we collected. To extend data diversity, we also include published low-light raw videos without optical flow labels. In our training pipeline, with these three datasets, we create two teacher-student pairs to progressively obtain optical flow labels for all data. Finally, we apply a mix-up training policy with the diversified datasets to produce low-light-robust optical flow models for release. The experiments show that our method largely maintains optical flow accuracy as image exposure decreases, and its generalization ability is tested with different cameras in multiple practical scenes.

  • Research Article
  • Citations: 58
  • 10.1016/j.applanim.2013.02.001
In search of the behavioural correlates of optical flow patterns in the automated assessment of broiler chicken welfare
  • Mar 5, 2013
  • Applied Animal Behaviour Science
  • Marian Stamp Dawkins + 3 more


  • Research Article
  • Citations: 1
  • 10.1108/02644400610671108
A method for prediction and estimation of large‐amplitude optical flows via extended Kalman filtering approach
  • Jul 1, 2006
  • Engineering Computations
  • Oleg Michailovich + 1 more

Purpose: This paper seeks to develop a reliable and computationally efficient method for estimating and predicting large-amplitude optical flows by taking into consideration their coherence along the time dimension.

Design/methodology/approach: Although differential-based techniques for estimating optical flow have long been in wide use owing to the relative simplicity of their mathematical description, their applicability is known to be limited to situations in which the optical flow has a relatively small norm. To extend such methods to large-amplitude optical flows, it is proposed to model the optical flow as a composition of its time-delayed version and a complementary optical flow. The former is used to predict the current optical flow and, subsequently, to warp forward the preceding image of the tracking sequence, while the latter accounts for the residual displacements, which are estimated using Kalman filtering based on the “small norm” assumption.

Findings: The study shows that taking the temporal coherence of optical flows into consideration results in considerable improvement in estimation quality when the amplitude of the optical flow is relatively large and, hence, the “small norm” assumption is not applicable.

Research limitations/implications: In the present work, the algorithm is formulated under the assumption that the optical flow is affine. This assumption may be restrictive in practice; consequently, an important direction for extending this work is to consider more general classes of optical flows.

Originality/value: The main contribution of the present study is the use of multigrid methods and a projection scheme to relate the state equation to the apparent image motion.
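The flow decomposition described above — predict with a time-delayed flow, warp the preceding image, then estimate only a small residual — can be illustrated in one dimension. A toy sketch: integer-rounded warping and a single differential step stand in for the paper's Kalman filtering, and all names are illustrative.

```python
import numpy as np

def warp_then_residual(I_prev, I_cur, flow_pred):
    """1-D illustration of large-flow estimation by composition: warp the
    preceding signal by the (integer-rounded) predicted flow, then estimate
    only the small residual displacement on the warped pair with a one-shot
    differential step; total flow = prediction + residual."""
    idx = np.arange(len(I_prev))
    src = np.clip(idx - np.rint(flow_pred).astype(int), 0, len(I_prev) - 1)
    I_warp = I_prev[src]                    # prediction-compensated signal
    Ix = np.gradient(I_warp)
    It = I_cur - I_warp
    resid = -It * Ix / (Ix ** 2 + 1e-8)     # per-sample small-norm estimate
    return flow_pred + resid
```

When the prediction already explains the motion, the residual vanishes and the total flow equals the prediction.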

  • Research Article
  • Citations: 22
  • 10.3390/ani10020323
Utilization of Optical Flow Algorithms to Monitor Development of Tail Biting Outbreaks in Pigs
  • Feb 18, 2020
  • Animals : an Open Access Journal from MDPI
  • Yuzhi Z Li + 2 more

Simple Summary: Optical flow is a measurement of the movement of individual objects in a group and can be used to monitor activity changes in both humans and animals. Using optical flow to monitor activity changes in pigs has not yet been reported. In this study, the behavior of pigs in four pens of 30 pigs was video-recorded. The video recordings before and during the first outbreak of tail biting were viewed manually to register active and resting behaviors of pigs. The same video segments used for behavioral evaluation were used to calculate optical flow. Results indicate that mean optical flow was higher three days before and during the day of the tail-biting outbreak, suggesting an increased activity level compared to 10 days before the outbreak. All optical flow measures were correlated with time spent standing by pigs, indicating that movement during standing was associated with the optical flow measures. These results suggest that optical flow measures might be a useful tool for automatically detecting activity changes associated with the onset of tail-biting outbreaks.

Abstract: A study was conducted to evaluate activity changes in pigs associated with the development of tail-biting outbreaks using optical flow algorithms. Pigs (n = 120; initial body weight = 25 ± 2.9 kg) housed in four pens of 30 pigs were studied for 13 weeks. Outbreaks of tail biting were registered through daily observations, and the behavior of pigs in each pen was video-recorded. Three one-hour video segments, representing morning, noon, and afternoon on days 10, 7, and 3 before and during the first outbreak of tail biting, were scanned at 5-min intervals to estimate the time budget for lying, standing, eating, drinking, pig-directed behavior, and tail biting. The same video segments were analyzed for optical flow. Mean optical flow was higher three days before and during the tail-biting outbreak compared to 10 days before the outbreak (p < 0.05), suggesting that pigs may increase their activity three days before tail-biting outbreaks. All optical flow measures (mean, variance, skewness, and kurtosis) were correlated (all p < 0.01) with time spent standing, indicating that movement during standing may be associated with optical flow measures. These results suggest that optical flow might be a promising tool for automatically monitoring activity changes to predict tail-biting outbreaks in pigs.
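The four optical flow measures used above can be computed from a dense flow field as summary statistics of per-pixel flow magnitudes. A minimal sketch; the study's exact implementation is not given in the abstract.

```python
import numpy as np

def flow_activity_stats(flow):
    """Summary statistics of per-pixel flow magnitudes, as used for
    activity monitoring above: mean, variance, skewness, and kurtosis.
    flow has shape (H, W, 2)."""
    mag = np.linalg.norm(flow, axis=-1).ravel()   # per-pixel motion magnitude
    mu = mag.mean()
    var = mag.var()
    std = np.sqrt(var) if var > 0 else 1.0        # guard against constant flow
    z = (mag - mu) / std
    return {"mean": mu, "variance": var,
            "skewness": (z ** 3).mean(), "kurtosis": (z ** 4).mean()}
```

For a perfectly uniform flow field, mean equals the common magnitude and the higher moments vanish.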

  • Conference Article
  • Citations: 1
  • 10.1109/icaccaf.2017.8344729
Densely sampled noiseless optical flow for motion based visual activity analysis
  • Sep 1, 2017
  • Naresh Kumar

Video-based activity analysis is an interesting area of research in the computer vision and machine learning community, with great impact on video surveillance, system monitoring, and social media analytics. Optical flow estimation provides a benchmark for motion-based human activity analysis in video sequences. Optical flow reflects the changes between two images due to variation of the space and time parameters of the objects. Varying motion parameters in an image sequence make it harder to compute dense flow from the optical flow of pixels. Determining optical flow is easier with the Horn-Schunck and Lucas-Kanade methods, due to their dependence on the similarity of the light reflected from both images. Dense optical flow is guaranteed to be smooth by the Horn-Schunck method, but it lacks neighboring-pixel information; the Lucas-Kanade method is successful at noise removal, but due to its small range of velocities it fails to provide dense optical flow. In this work, addressing these issues, we introduce a smoothness constraint to find grey-level corners and smooth the optical flow across edges. Finally, we combine this approach with the Nagel and Horn-Schunck methods to obtain dense and noiseless optical flow. This approach gives promising results for smooth optical flow while preserving discontinuities at corners where pixel velocity changes sharply.
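A minimal Horn-Schunck iteration, the first of the two classical methods discussed above, can be sketched as follows. Finite-difference gradients, 4-neighbour flow averages, and the classic Jacobi-style update; the smoothness weight and iteration count are illustrative.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck solver: iterate the classic update
    u <- u_avg - Ix*(Ix*u_avg + Iy*v_avg + It) / (alpha^2 + Ix^2 + Iy^2)
    (and likewise for v), with 4-neighbour averages for u_avg, v_avg."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    avg4 = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                             np.roll(f, 1, 1) + np.roll(f, -1, 1))
    denom = alpha ** 2 + Ix ** 2 + Iy ** 2
    for _ in range(n_iter):
        ub, vb = avg4(u), avg4(v)
        t = (Ix * ub + Iy * vb + It) / denom
        u, v = ub - Ix * t, vb - Iy * t
    return u, v
```

On a smooth pattern shifted one pixel to the right, the recovered horizontal flow is clearly positive.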

  • Research Article
  • Citations: 11
  • 10.1016/j.patrec.2024.03.022
Joint facial action unit recognition and self-supervised optical flow estimation
  • Mar 27, 2024
  • Pattern Recognition Letters
  • Zhiwen Shao + 4 more

