Interesting_conf_papers

Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials - densecrf.pdf
12_Cogswell_SUNw.pdf
www.cse.iitm.ac.in/~amittal/eccv2014.pdf
graphics.cs.cmu.edu/projects/contextPrediction/contextPrediction.pdf
www.cs.stevens.edu/~mordohai/public/Teran_3dInterestPoints14.pdf
lmb.informatik.uni-freiburg.de/Publications/2014/Bro14/quiroga_sceneflow_eccv14.pdf

ladoga.graphics.cs.cmu.edu/xinleic/NEILWeb/papers/neil-seg.pdf
web.cecs.pdx.edu/~fliu/papers/cvpr2014-depth.pdf

Fast Edge-Preserving PatchMatch for Large Displacement Optical Flow. Linchao Bao (CityU CS). IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
Abstract: We present a fast optical flow algorithm that can handle large displacement motions. Our algorithm is inspired by recent successes of local methods in visual correspondence searching as well as approximate nearest neighbor field algorithms. The main novelty is a fast randomized edge-preserving approximate nearest neighbor field algorithm which propagates self-similarity patterns in addition to offsets. Experimental results on public optical flow benchmarks show that our method is significantly faster than state-of-the-art methods without compromising on quality, especially when scenes contain large motions.
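For orientation, the sketch below shows plain PatchMatch approximate nearest neighbor field (NNF) search (random initialization, propagation, random search), which this line of work builds on. It is only a simplified illustration, not Bao et al.'s algorithm: the edge-preserving weighting and the propagation of self-similarity patterns mentioned in the abstract are omitted, and every name and parameter (ssd, patchmatch_nnf, r, n_iters) is invented for the example.

```python
import numpy as np

def ssd(a, b, ay, ax, by, bx, r):
    # Sum of squared differences between the (2r+1)x(2r+1) patches
    # centered at (ay, ax) in a and (by, bx) in b.
    pa = a[ay - r:ay + r + 1, ax - r:ax + r + 1]
    pb = b[by - r:by + r + 1, bx - r:bx + r + 1]
    return float(np.sum((pa - pb) ** 2))

def patchmatch_nnf(a, b, r=3, n_iters=4, seed=0):
    """Approximate nearest neighbor field from grayscale image a to b.
    nnf[y, x] = (dy, dx): the patch at (y, x) in a matches (y+dy, x+dx) in b."""
    rng = np.random.default_rng(seed)
    h, w = a.shape
    ys, xs = np.arange(r, h - r), np.arange(r, w - r)
    nnf = np.zeros((h, w, 2), dtype=int)
    cost = np.full((h, w), np.inf)

    def try_offset(y, x, dy, dx):
        # Keep (dy, dx) if it points to a valid patch and lowers the cost.
        ty, tx = y + dy, x + dx
        if r <= ty < h - r and r <= tx < w - r:
            c = ssd(a, b, y, x, ty, tx, r)
            if c < cost[y, x]:
                cost[y, x] = c
                nnf[y, x] = (dy, dx)

    # 1. Random initialization of the offset field.
    for y in ys:
        for x in xs:
            ty = rng.integers(r, h - r)
            tx = rng.integers(r, w - r)
            nnf[y, x] = (ty - y, tx - x)
            cost[y, x] = ssd(a, b, y, x, ty, tx, r)

    for it in range(n_iters):
        # Alternate the scan direction so good offsets spread both ways.
        step = 1 if it % 2 == 0 else -1
        for y in (ys if step == 1 else ys[::-1]):
            for x in (xs if step == 1 else xs[::-1]):
                # 2. Propagation: adopt the offsets of already-visited neighbors.
                try_offset(y, x, *nnf[y - step, x])
                try_offset(y, x, *nnf[y, x - step])
                # 3. Random search around the current best in a shrinking window.
                radius = max(h, w)
                while radius >= 1:
                    dy = int(nnf[y, x, 0]) + int(rng.integers(-radius, radius + 1))
                    dx = int(nnf[y, x, 1]) + int(rng.integers(-radius, radius + 1))
                    try_offset(y, x, dy, dx)
                    radius //= 2
    return nnf

# Tiny usage example (small images keep the pure-Python loops fast).
rng = np.random.default_rng(42)
img_b = rng.standard_normal((40, 40))
img_a = np.roll(img_b, shift=(2, 3), axis=(0, 1))
nnf = patchmatch_nnf(img_a, img_b)
print(np.median(nnf[5:-5, 5:-5, 0]), np.median(nnf[5:-5, 5:-5, 1]))  # typically close to -2, -3
```

In an EPPM-style pipeline, such NNF offsets serve as the raw large-displacement correspondences that are then filtered and refined into a dense flow field; the paper's contribution is making this step both edge-preserving and fast.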

From the EPPM project page - Motivation: we use an approximate nearest neighbor field (NNF) for large displacement optical flow estimation in this work. (Remaining page sections: Edge-Preserving PatchMatch; Fast Algorithm Pipeline; Performance; Results on Benchmarks, reported as "EPPM"; Downloads.)

www.andrew.cmu.edu/user/rcabral/CabralFurukawaCVPR2014.pdf
www.cvg.unibe.ch/dperrone/tvdb/perrone2014tv.pdf
www.dsi.unive.it/~srotabul/files/publications/CVPR2014a.pdf
pdf/1312.4659v3.pdf
www.connellybarnes.com/work/publications/2014_camo.pdf
faculty.ucmerced.edu/mhyang/papers/cvpr14_parsing.pdf
pdf/1311.5591v2.pdf
web.engr.oregonstate.edu/~sinisa/research/publications/cvpr14_mutex.pdf
www.cs.cmu.edu/~hanbyulj/14/CVPR_2014_Visibility.pdf
www.ri.cmu.edu/pub_files/2014/3/egpaper_final.pdf
hci.iwr.uni-heidelberg.de/publications/mip/techrep/beier_14_cut.pdf
www.igp.ethz.ch/photogrammetry/publications/pdf_folder/CVPR2014_Hartmann.pdf
homes.esat.kuleuven.be/~krematas/imgSynth/ImageBasedSynthesis.pdf

www.cvl.isy.liu.se/research/objrec/visualtracking/colvistrack/CN_Tracking_CVPR14.pdf
pdf/1311.5591v2.pdf
www.cs.berkeley.edu/~rbg/papers/r-cnn-cvpr.pdf
files.is.tue.mpg.de/black/papers/RGA2014.pdf
TTommasi_ICCV_2013.pdf
Cbf_iccv13.pdf
[1304.5583] Distributed Low-rank Subspace Segmentation
pub.lizhuwen.com/papers/moseg_iccv13.pdf
www.vision.ee.ethz.ch/~boxavier/PID2973583.pdf
research.microsoft.com/en-us/um/people/pkohli/papers/kks_iccv2013.pdf
www.cs.cornell.edu/projects/disambig/files/disambig_iccv2013.pdf
lmb.informatik.uni-freiburg.de/people/ummenhof/pub/ummenhofer2013Point.pdf
www6.in.tum.de/Main/Publications/Heise2013.pdf

Large-Scale Multi-Resolution Surface Reconstruction from RGB-D Sequences
faculty.ucmerced.edu/mhyang/papers/iccv13_registration.pdf
www.robots.ox.ac.uk/~carl/papers/STAR3D_Final.pdf
cvg.ethz.ch/mobile/LiveMetric3DReconstructionICCV2013.pdf
www.audiolabs-erlangen.de/content/05-fau/professor/00-mueller/03-publications/2013_HeltenBMT_TrackingDepthInertial_ICCV.pdf
files.is.tue.mpg.de/chwang/papers/CDC_HOMRF_ICCV13.pdf
paul.rutgers.edu/~elqursh/papers/elqursh-iccv2013.pdf
vis-www.cs.umass.edu/papers/motionSegmentationICCV2013.pdf

From IEEE Xplore: In this paper we propose a new method for detecting multiple specific 3D objects in real time. We start from the template-based approach based on the LINE2D/LINEMOD representation introduced recently by Hinterstoisser et al., yet extend it in two ways. First, we propose to learn the templates in a discriminative fashion. We show that this can be done online during the collection of the example images, in just a few milliseconds, and has a big impact on the accuracy of the detector.

Second, we propose a scheme based on cascades that speeds up detection. Since detection of an object is fast, new objects can be added with very low cost, making our approach scale well. In our experiments, we easily handle 10-30 3D objects at frame rates above 10fps using a single CPU core. We outperform the state of the art both in speed and in accuracy, as validated on 3 different datasets. (A toy sketch of the cascade idea follows the link list below.)

BodisCVPR2014.pdf
P/tiny/tiny.pdf
Marc Pollefeys' homepage
www.cs.unc.edu/~ezheng/cameraReady_enliangCVPR2014.pdf
cvg.ethz.ch/mobile/MobilePhonesto3DScannersCVPR2014.pdf
www.stat.ucla.edu/~yuille/Pubs10_12/BeatMTurkers_ChenFidlerYuilleUrtasun_CVPR2014.pdf
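As referenced above, here is a toy two-stage cascade detector: a cheap comparison on a subsampled window rejects most locations before the expensive full-resolution template score is computed. This only illustrates the cascade principle; it uses plain cosine similarity on a generic feature map rather than the LINE2D/LINEMOD gradient-orientation representation or the discriminatively learned templates of the paper, and every name and threshold (detect_cascade, coarse_thresh, fine_thresh) is made up for the sketch.

```python
import numpy as np

def cosine_sim(u, v):
    # Cosine similarity between two flattened patches.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def detect_cascade(feature_map, templates, stride=2, sub=4,
                   coarse_thresh=0.5, fine_thresh=0.8):
    """Slide every template over feature_map; a cheap comparison on a
    subsampled window rejects most locations before the full-resolution
    score is computed."""
    H, W = feature_map.shape
    detections = []
    for obj_id, tmpl in templates.items():
        th, tw = tmpl.shape
        coarse_tmpl = tmpl[::sub, ::sub].ravel()
        full_tmpl = tmpl.ravel()
        for y in range(0, H - th + 1, stride):
            for x in range(0, W - tw + 1, stride):
                win = feature_map[y:y + th, x:x + tw]
                # Stage 1: coarse, subsampled comparison (cheap reject).
                if cosine_sim(win[::sub, ::sub].ravel(), coarse_tmpl) < coarse_thresh:
                    continue
                # Stage 2: full-resolution comparison (rarely reached).
                score = cosine_sim(win.ravel(), full_tmpl)
                if score >= fine_thresh:
                    detections.append((obj_id, y, x, score))
    return detections

# Toy usage with random "features" and two templates; one template is a
# crop planted in the scene, so exactly that location should be reported.
rng = np.random.default_rng(1)
scene = rng.standard_normal((120, 160))
templates = {"obj_a": scene[10:34, 20:44].copy(),
             "obj_b": rng.standard_normal((24, 24))}
print(detect_cascade(scene, templates))
```

In this toy version the saving comes from the cheap first stage rejecting almost all windows, so the expensive comparison runs only on a handful of candidates; adding more templates therefore increases the total cost only modestly, which mirrors the scaling behaviour the abstract describes.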

www.vision.ee.ethz.ch/~dragonr/1401.pdf
www.stat.ucla.edu/~sczhu/papers/Conf_2014/Single_view_Scene_Reconstruction_cvpr2014.pdf
www.cv-foundation.org/openaccess/content_iccv_2013/papers/Bradley_Local_Signal_Equalization_2013_ICCV_paper.pdf

Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion. Multi-view structure from motion (SfM) estimates the position and orientation of pictures in a common 3D coordinate frame. When views are treated incrementally, this external calibration can be subject to drift, contrary to global methods that distribute residual errors evenly.

We propose a new global calibration approach based on the fusion of relative motions between image pairs. We improve an existing method for robustly computing global rotations. We present an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted. We also define an efficient translation registration method that recovers accurate camera positions. These components are combined into an original SfM pipeline. Our experiments show that, on most datasets, it outperforms in accuracy other existing incremental and global pipelines. The key ingredients of this global Structure from Motion chain are convex optimization and a reduced trifocal tensor.
www.cv-foundation.org/openaccess/content_iccv_2013/papers/Chatterjee_Efficient_and_Robust_2013_ICCV_paper.pdf
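The first ingredient, turning pairwise relative rotations into consistent global rotations, can be illustrated with a very small sketch. This is not the authors' method (they improve a robust rotation-averaging scheme and use an a contrario trifocal tensor plus a dedicated translation registration step); it is naive chordal averaging with an SVD projection onto SO(3), and all names (average_rotations, rel_rots) are invented for the example.

```python
import numpy as np

def project_to_so3(M):
    # Closest rotation matrix to M (orthogonal Procrustes via SVD).
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # enforce det(R) = +1 if needed
        U[:, -1] *= -1
        R = U @ Vt
    return R

def average_rotations(n, rel_rots, n_iters=50):
    """Estimate global rotations R_0..R_{n-1} (R_0 fixed as the gauge) from
    pairwise measurements rel_rots[(i, j)] ~ R_j @ R_i.T by chordal averaging."""
    R = [np.eye(3) for _ in range(n)]
    for _ in range(n_iters):
        for k in range(1, n):                     # camera 0 stays at the identity
            preds = []
            for (i, j), Rij in rel_rots.items():
                if j == k:
                    preds.append(Rij @ R[i])      # edge (i, k): R_k ~ R_ik R_i
                elif i == k:
                    preds.append(Rij.T @ R[j])    # edge (k, j): R_k ~ R_kj^T R_j
            if preds:
                R[k] = project_to_so3(np.mean(preds, axis=0))
    return R

# Toy usage: three cameras, exact relative rotations about the z axis.
def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

gt = [rot_z(0.0), rot_z(0.3), rot_z(0.7)]
rel = {(i, j): gt[j] @ gt[i].T for i in range(3) for j in range(i + 1, 3)}
est = average_rotations(3, rel)
print(np.allclose(est[1], gt[1]) and np.allclose(est[2], gt[2]))  # True
```

Camera 0 is held fixed because global rotations are only determined up to one common rotation (the gauge); the paper's pipeline then estimates translation directions and registers camera positions on top of such globally consistent rotations.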

lmb.informatik.uni-freiburg.de/people/ummenhof/pub/ummenhofer2013Point.pdf
static.googleusercontent.com/media/research.google.com/en//pubs/archive/41413.pdf
www.xiaoyumu.com/s//PDF/Regionlets.pdf
web.mit.edu/vondrick/ihog/iccv.pdf

LEAR - DeepMatchingFlow. Philippe Weinzaepfel, Jerome Revaud, Zaid Harchaoui, Cordelia Schmid.
Downloads: Please note that the code is provided only for scientific or personal use.

If you have any questions, please contact Jerome Revaud for the Deep Matching part (jerome.revaud (at) inria.fr) or Philippe Weinzaepfel for the DeepFlow part (philippe.weinzaepfel (at) inria.fr).
Code for Deep Matching: see Jerome Revaud's webpage (NEW March 2014: C/C++ code is available).
Source code in C for DeepFlow: once you have downloaded the Deep Matching code, you can download the DeepFlow code here.

Source code in C for FastDeepFlow (NEW): FastDeepFlow takes advantage of SSE instructions to speed up (x2) the computation of DeepFlow.
Matches used in the ICCV'13 paper: you can download the exact matches that we use in our ICCV'13 paper for the MPI-Sintel training and test sets (all versions) and the KITTI training and test sets.
Citation: if you use our code, please cite our paper.

www.vision.ee.ethz.ch/~boxavier/PID2973583.pdf
papers/elastic-fragments.pdf
www.igp.ethz.ch/photogrammetry/publications/pdf_folder/iccv13_Vogel.pdf
www1.coe.neu.edu/~cdicle/papers/dicle_iccv13.pdf