Solving problems in machine learning very often comes down to the question of how to represent the data. While in the past most problems were approached with manually crafted features, there is a clear move towards learning algorithms that derive robust and informative features themselves. The 5th International Conference on Learning Representations (ICLR) attracts researchers from all around the world to share their findings and discuss ideas.
In our project “BeesBook”, we automatically recognize barcode-like markers on honeybees with deep neural networks. These networks, however, need a vast amount of training data: images of markers together with their labels (IDs and rotation angles). Deep convolutional networks are well suited to the problem, but they require prohibitively large training sets that are costly to obtain – labeling one by hand would have taken a single worker an entire year. Our new method, RenderGAN (Sixt et al. 2017), learns to generate realistic marker images: it captures both the underlying structure of the barcode design and the particularities of the imaging process. Our paper on RenderGAN was accepted to the workshop track of ICLR, and Leon and Ben traveled to Toulon, France, to present a poster on it.
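The core trick behind generating labeled data this way can be illustrated in a few lines. Because the generator starts from a simplistic renderer whose output is fully determined by the label, augmentations that only add realism cannot invalidate that label. The toy sketch below is not the actual RenderGAN implementation: `render_marker`, `augment_blur`, and `decode` are hypothetical stand-ins, and a fixed box blur takes the place of the learned augmentation functions.

```python
import numpy as np

def render_marker(bits, size=12):
    """Toy renderer: each ID bit becomes a vertical stripe (0=dark, 1=bright).
    The image is fully determined by the label, as in a template renderer."""
    row = np.repeat(np.array(bits, dtype=float), size // len(bits))
    return np.tile(row, (size, 1))

def augment_blur(img):
    """Stand-in for a learned augmentation: a 3x3 box blur adds 'realism'
    (soft edges) but cannot flip which stripes are set, so the label survives."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def decode(img, n_bits=6):
    """Read the label back by thresholding the mean brightness per stripe."""
    cols = img.mean(axis=0)
    stripes = cols.reshape(n_bits, img.shape[1] // n_bits).mean(axis=1)
    return (stripes > 0.5).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 1]
realistic = augment_blur(render_marker(bits))
print(decode(realistic))  # the label is preserved through the augmentation
```

In RenderGAN itself the augmentation functions (lighting, blur, background, noise) are parameterized and trained adversarially against real marker images, but the same invariant holds: every generated image comes with its label for free.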