I want to re-encode a JPEG with the same settings it was encoded with before. I have the quantization table extracted from the original image, in either its 8-bit or 16-bit precision form, but I'm failing to feed it back into jpeg-encoder in a way that produces a new JPEG using the same quantization table.
What exactly is the format that QuantizationTableType::Custom expects? Apparently it isn't a quantization table scaled to u16, but plain u8 data doesn't give the desired result either.
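For context, here is roughly what I'm trying. The table contents below are placeholders for the 64 values I extracted from the original file's DQT segments, and I'm assuming that set_quantization_tables and a boxed [u16; 64] inside Custom are the right way to pass them in; if the expected scale or ordering is different, that's exactly the part I'm missing:

```rust
use jpeg_encoder::{ColorType, Encoder, QuantizationTableType};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder: the 64 luma values extracted from the source JPEG's DQT
    // segment, in the order the decoder reported them.
    let luma_dqt: [u16; 64] = [16; 64];
    // Placeholder: the extracted chroma table.
    let chroma_dqt: [u16; 64] = [17; 64];

    let mut encoder = Encoder::new_file("reencoded.jpg", 100)?;

    // My assumption: Custom wraps the raw DQT values as a boxed [u16; 64],
    // unscaled. (It's unclear to me whether the quality setting still
    // rescales a custom table.)
    encoder.set_quantization_tables(
        QuantizationTableType::Custom(Box::new(luma_dqt)),
        QuantizationTableType::Custom(Box::new(chroma_dqt)),
    );

    // Dummy 8x8 RGB image just to keep the example self-contained.
    let pixels = vec![128u8; 8 * 8 * 3];
    encoder.encode(&pixels, 8, 8, ColorType::Rgb)?;
    Ok(())
}
```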
I found this mysterious code:

```rust
fn get_user_table(table: &[u16; 64]) -> [NonZeroU16; 64] {
    let mut q_table = [NonZeroU16::new(1).unwrap(); 64];
    for (i, &v) in table.iter().enumerate() {
        q_table[i] = match NonZeroU16::new(v.clamp(1, 2 << 10) << 3) {
            Some(v) => v,
            None => panic!("Invalid quantization table value: {}", v),
        };
    }
    q_table
}
```

It seems to limit the input values to 2048 and then left-shift everything by 3. Is that the key to understanding what the input format is?