March 2: Local Features: From SIFT to Differentiable Methods

Vassileios Balntas

Abstract: Local feature matching is one of the cornerstones of “classical” computer vision. Despite the recent advances in deep learning, a crucial question remains: can learned local feature methods outperform the classical ones? Initial results have shown that classical, non-deep-learning methods are still very competitive and can outperform recent deep learning methods [1, 2, 3]. However, recent work [4, 5] has focused on training end-to-end differentiable systems for particular applications, with promising results. This tutorial aims to (1) present a historical overview of the classical methods, (2) present an overview of how deep learning changed the classical methods, (3) focus on modern end-to-end (e2e) methods that are trained in a fully differentiable manner, (4) present several practical examples of end-to-end differentiable local features using the recently introduced kornia library, which was created by two of the tutorial’s organisers, and (5) present recommendations for the practical use of local features and how they interact with other parts of the image matching pipeline: image preprocessing, matching, RANSAC, etc.
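To give a flavour of the hands-on part, below is a minimal sketch of differentiable SIFT description and descriptor matching with kornia. It assumes kornia's SIFTDescriptor and match_nn interfaces; the random patch tensors are placeholders standing in for patches cropped around detected keypoints, and detection itself is omitted. It is an illustration under those assumptions, not the tutorial's actual material.

    import torch
    import kornia.feature as KF

    # Placeholder 32x32 grayscale patches; in practice these would be
    # cropped around detected keypoints in two images.
    patches_a = torch.rand(128, 1, 32, 32)
    patches_b = torch.rand(128, 1, 32, 32)

    # Differentiable (Root)SIFT descriptor: gradients can flow back
    # through the description step to the patches.
    sift = KF.SIFTDescriptor(patch_size=32, rootsift=True)
    desc_a = sift(patches_a)  # shape (128, 128)
    desc_b = sift(patches_b)

    # Nearest-neighbour matching on descriptor distances.
    dists, idxs = KF.match_nn(desc_a, desc_b)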

March 3: Distributed Hyperparameter Optimization and Model Search with Examples Using SHADHO

Jeffrey Kinnison

Abstract: Computer vision as a field now relies on learning algorithms, particularly deep convolutional neural networks (CNNs), to solve problems such as object classification, segmentation, style transfer, and super-resolution, to name a few. Selecting the correct algorithm, with the correct configuration, and training a model to solve these problems is, however, difficult: to create the highest-quality CNNs possible, one must first create an architecture and then optimize the hyperparameters that control the learning process. The entire process of model selection is extremely time consuming because hyperparameters, and model performance in general, can only be evaluated experimentally through standard training and testing procedures. This tutorial will introduce distributed hyperparameter optimization (HPO) using the Scalable Hardware-Aware Distributed Hyperparameter Optimizer (SHADHO), an open-source framework for distributed model search that works with modern Python machine/deep learning code. Using SHADHO, we will formulate the problem of HPO, demonstrate how to set up search spaces and distributed searches, and show off a variety of search algorithms available out of the box.
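As background for the tutorial, the sketch below illustrates the HPO problem that SHADHO automates and distributes: minimizing an expensive objective over a search space of hyperparameters. It deliberately uses plain random search and a made-up search space and objective rather than SHADHO's own API, which the tutorial itself covers.

    import random

    # Hypothetical search space: each entry is a sampler for one hyperparameter.
    search_space = {
        "learning_rate": lambda: 10 ** random.uniform(-5, -1),      # log-uniform
        "batch_size":    lambda: random.choice([32, 64, 128, 256]),
        "num_filters":   lambda: random.choice([16, 32, 64]),
    }

    def objective(params):
        # Placeholder objective. In practice this trains a CNN with `params`
        # and returns its validation loss; it is this expensive, repeated
        # evaluation that a distributed optimizer parallelises.
        return (params["learning_rate"] - 1e-3) ** 2 + params["batch_size"] * 1e-6

    # Plain random search: sample configurations, keep the best one seen.
    best = None
    for _ in range(50):
        params = {name: sample() for name, sample in search_space.items()}
        loss = objective(params)
        if best is None or loss < best[0]:
            best = (loss, params)

    print("best loss:", best[0], "with", best[1])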