
Description
Hi - is there a plan to support nn.ConvTranspose2d for both QNNPACK and FBGEMM? I am using this layer in my model. After applying QNNPACK quantization (both post-training static and QAT), I am seeing a large drop in accuracy and only a small improvement in speed. I suspect this could be due to ConvTranspose2d.
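For reference, here is a minimal sketch of the post-training static quantization flow I am following with the QNNPACK backend. The TinyModel module and the random calibration input are placeholders standing in for my actual UNet and calibration data:

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# Hypothetical stand-in for the real segmentation network.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

# Select the QNNPACK backend and its matching default qconfig.
torch.backends.quantized.engine = 'qnnpack'
model = TinyModel().eval()
model.qconfig = tq.get_default_qconfig('qnnpack')

# Insert observers, calibrate on (dummy) data, then convert to int8.
prepared = tq.prepare(model)
with torch.no_grad():
    prepared(torch.randn(1, 3, 64, 64))
quantized = tq.convert(prepared)
```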
I am using a UNet model for semantic segmentation. On the upsampling side there is a series of blocks, each containing a ConvTranspose2d operation. The quantized model has to keep de-quantizing weights/activations to float for each ConvTranspose2d layer and then re-quantizing for the subsequent convolution/BN/ReLU layers, as in the sketch below.
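Roughly, each upsampling block currently looks like this (the UpBlock module and its channel sizes are hypothetical, just to show the structure): the float ConvTranspose2d is bracketed by a DeQuantStub/QuantStub pair and excluded from quantization by giving it a qconfig of None, while the following Conv/BN/ReLU stay quantized.

```python
import torch.nn as nn
import torch.quantization as tq

class UpBlock(nn.Module):
    """One UNet upsampling block: ConvTranspose2d runs in float, the rest quantized."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dequant = tq.DeQuantStub()
        self.up = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
        self.up.qconfig = None  # skip quantization for this layer (no quantized kernel)
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(in_ch // 2, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.dequant(x)   # back to float32 for the transposed convolution
        x = self.up(x)
        x = self.quant(x)     # re-quantize for the following Conv/BN/ReLU
        return self.relu(self.bn(self.conv(x)))
```

With several of these blocks in the decoder, the repeated dequantize/quantize round trips are what I suspect is eating both the accuracy and the speed gains.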