Neural Particle Image Velocimetry

2021-01-28 12:03:39
Nikolay Stulov, Michael Chertkov

Abstract

In the past decades, great progress has been made in the field of optical and particle-based measurement techniques for the experimental analysis of fluid flows. The Particle Image Velocimetry (PIV) technique is widely used to identify flow parameters from time-consecutive snapshots of particles injected into the fluid. The computation is performed as post-processing of the experimental data via a proximity measure between particles in frames of reference. However, the post-processing step becomes problematic as the motility and density of the particles increase, since the data emerge at extreme rates and volumes. Moreover, existing algorithms for PIV either provide sparse estimations of the flow or require a large computational time frame, preventing on-line use. The goal of this manuscript is therefore to develop an accurate on-line algorithm for estimating the fine-grained velocity field from PIV data. As the data constitute a pair of images, we employ computer vision methods to solve the problem. In this work, we introduce a convolutional neural network adapted to the problem, namely the Volumetric Correspondence Network (VCN), which was recently proposed for end-to-end optical flow estimation in computer vision. The network is thoroughly trained and tested on a dataset containing both synthetic and real flow data. Experimental results are analyzed and compared to those of conventional methods as well as other recently introduced methods based on neural networks. Our analysis indicates that the proposed approach provides improved efficiency while keeping accuracy on par with other state-of-the-art methods in the field. We also verify through a-posteriori tests that our newly constructed VCN schemes reproduce physically relevant statistics of velocity and velocity gradients well.
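The abstract describes the approach only at a high level: a convolutional network maps a pair of consecutive particle images to a dense, per-pixel velocity field and is trained against ground-truth flow. The following is a minimal, hypothetical PyTorch sketch of that input/output contract. It is not the authors' VCN; the TinyPIVNet architecture, its layer widths, and the endpoint-error loss are illustrative assumptions, and random tensors stand in for real particle-image pairs.

```python
# Minimal sketch of CNN-based PIV velocity estimation (assumed setup, not the
# paper's VCN). Two consecutive particle images go in; a dense two-component
# displacement/velocity field comes out.
import torch
import torch.nn as nn

class TinyPIVNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The two time-consecutive grayscale frames are stacked along channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Per-pixel regression head: 2 output channels = (u, v) components.
        self.head = nn.Conv2d(64, 2, kernel_size=3, padding=1)

    def forward(self, frame_a, frame_b):
        x = torch.cat([frame_a, frame_b], dim=1)   # (B, 2, H, W)
        return self.head(self.encoder(x))          # (B, 2, H, W) velocity field

def endpoint_error(pred, target):
    # Mean Euclidean distance between predicted and true displacement vectors,
    # a standard training/evaluation loss for dense flow estimation.
    return torch.norm(pred - target, dim=1).mean()

# Usage with synthetic stand-in data (random tensors replace particle images):
model = TinyPIVNet()
frame_a = torch.rand(4, 1, 256, 256)    # batch of frames at time t
frame_b = torch.rand(4, 1, 256, 256)    # same views at time t + dt
flow_true = torch.randn(4, 2, 256, 256) # ground-truth velocity field
loss = endpoint_error(model(frame_a, frame_b), flow_true)
loss.backward()
```

In this toy setup the supervision signal is a known velocity field, which mirrors the paper's use of synthetic flow data for training before evaluation on real PIV recordings.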

URL

https://arxiv.org/abs/2101.11950

PDF

https://arxiv.org/pdf/2101.11950.pdf
