Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292
Podcast: The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Published On: Mon Aug 19 2019
Description: Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, where he leads the compression and quantization research teams. In our conversation with Tijmen we discuss the ins and outs of compression and quantization of ML models, particularly neural networks, how much models can actually be compressed, and the best ways to achieve compression. We also look at a few recent papers, including “The Lottery Ticket Hypothesis.”
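As a rough illustration of the kind of quantization discussed in the episode, here is a minimal sketch of symmetric 8-bit linear quantization of a weight tensor. The function names and scale choice are illustrative assumptions for this post, not the specific methods described by Tijmen or used at Qualcomm.

```python
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Map float weights to int8 with a single symmetric scale (illustrative only)."""
    # Choose the scale so the largest-magnitude weight maps to the int8 extreme (127).
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight matrix and measure the round-trip error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_symmetric_int8(w)
w_hat = dequantize(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Storing the int8 tensor plus one float scale takes roughly a quarter of the memory of the original float32 weights, which is the basic trade-off between model size and numerical precision that the conversation explores.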