Relates to: #12 (comment)
In the new tokenizer, I decided:
- To use only byte vectors for integer and string tokens.
- Not to limit the size of the integer.
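As a rough sketch of what those two decisions mean together (the names here are hypothetical, not the actual tokenizer API): both integer and string tokens carry the raw bytes as scanned, so the tokenizer never parses the integer value and imposes no bound on its size.

```rust
// Hypothetical token type illustrating the two decisions above.
#[derive(Debug, PartialEq)]
pub enum BencodeToken {
    // Raw digits, e.g. b"42" for the bencoded integer `i42e`.
    // Stored as bytes, so the value's size is unbounded.
    Integer(Vec<u8>),
    // Raw bytes, e.g. b"spam" for the bencoded string `4:spam`.
    ByteString(Vec<u8>),
}

fn main() {
    // An integer far larger than any fixed-width type can hold
    // is still just a byte vector at the token level.
    let huge = BencodeToken::Integer(b"99999999999999999999999999999999".to_vec());
    let s = BencodeToken::ByteString(b"spam".to_vec());
    assert_eq!(s, BencodeToken::ByteString(vec![b's', b'p', b'a', b'm']));
    println!("{:?} {:?}", huge, s);
}
```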
I wanted to keep this first tokenizer version as simple as possible, but we can discuss other proposals from @da2ce7, like:
```rust
#[derive(Debug)]
pub enum Bencode<'a, R: BufRead + 'a> {
    Integer(BigInt),
    ByteString(Vec<u8>),
    List(BencodeListIterator<'a, R>),
    Dictionary(BencodeDictIterator<'a, R>),
}
```
- This implementation limits the `Integer` to 1024 bits.