
[Community Event] Doc Tests Sprint #16292

@patrickvonplaten

Description


This issue is part of our Doc Test Sprint. If you're interested in helping out, come join us on Discord and talk with other contributors!

Docstring examples are often the first point of contact when trying out a new library! So far we haven't done a very good job of ensuring that all docstring examples work correctly in 🤗 Transformers - but we're now committed to making sure every documentation example works, by testing each one via Python's doctest (https://docs.python.org/3/library/doctest.html) on a daily basis.
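For anyone new to doctest: it executes the `>>>` lines inside a docstring and compares their actual output to the text that follows, which is exactly what happens to the model doc examples once they're added to the daily test run. A minimal, self-contained sketch (the `add` function is just a toy stand-in for a model docstring):

```python
import doctest


def add(a, b):
    """Add two numbers.

    Example:

    >>> add(2, 3)
    5
    """
    return a + b


if __name__ == "__main__":
    # doctest collects the ">>>" examples in this module's docstrings,
    # runs them, and reports any output mismatches as failures.
    results = doctest.testmod(verbose=False)
    print(f"attempted={results.attempted} failed={results.failed}")
    # prints: attempted=1 failed=0
```

If the expected output line said `6` instead of `5`, doctest would report a failure, which is how a stale or wrong doc example gets caught.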

In short, we should do the following for all models, for both PyTorch and TensorFlow:

    • Check that the current doc examples run without failure
    • Add an expected output to the doc example and test it via Python's doctest (see Guide to contributing below)

Adding a documentation test for a model is a great way to better understand how the model works, a simple (possibly first) contribution to Transformers, and, most importantly, a valuable contribution to the Transformers community 🔥

If you're interested in adding a documentation test, please read through the Guide to contributing below.

This issue is a call for contributors to make sure docstring examples of existing model architectures work correctly. If you wish to contribute, reply in this thread with the architectures you'd like to take :)

Guide to contributing:

  1. Ensure you've read our contributing guidelines 📜

  2. Claim your architecture(s) in this thread (confirm no one is working on it) 🎯

  3. Implement the changes as in "add doctests for bart like seq2seq models" (#15987; see the diff on the model architectures for a few examples) 💪

    In addition, there are a few things we can also improve, for example:

    • Fix some style issues: for example, change ``decoder_input_ids`` to `decoder_input_ids`.
    • Use a small model checkpoint instead of a large one: for example, change "facebook/bart-large" to "facebook/bart-base" (and adjust the expected outputs, if any)
  4. Open the PR and tag one of us: @patrickvonplaten, @ydshieh, or @patil-suraj (don't forget to run `make fixup` before your final commit) 🎊

    • Note that some code is copied across our codebase. If you see a line like `# Copied from transformers.models.bert...`, this means that the code is copied from that source, and our scripts will automatically keep it in sync. If you see that, you should not edit the copied method! Instead, edit the original method it's copied from, and run `make fixup` to synchronize the change across all the copies. Be sure you have installed the development dependencies with `pip install -e ".[dev]"`, as described in the contributor guidelines above, so that the code quality tools in `make fixup` can run.

PyTorch Model Examples added to tests:

TensorFlow Model Examples added to tests:

  • ALBERT (@vumichien)
  • BART
  • BEiT
  • BERT (@vumichien)
  • BigBird (@vumichien)
  • BigBirdPegasus
  • Blenderbot
  • BlenderbotSmall
  • CamemBERT
  • Canine
  • CLIP (@Aanisha)
  • ConvBERT (@simonzli)
  • ConvNext
  • CTRL
  • Data2VecAudio
  • Data2VecText
  • DeBERTa
  • DeBERTa-v2
  • DeiT
  • DETR
  • DistilBERT (@jmwoloso)
  • DPR
  • ELECTRA (@bhadreshpsavani)
  • Encoder
  • FairSeq
  • FlauBERT
  • FNet
  • Funnel
  • GPT2 (@cakiki)
  • GPT-J (@cakiki)
  • Hubert
  • I-BERT
  • ImageGPT
  • LayoutLM
  • LayoutLMv2
  • LED
  • Longformer (@KMFODA)
  • LUKE
  • LXMERT
  • M2M100
  • Marian
  • MaskFormer (@reichenbch)
  • mBART
  • MegatronBert
  • MobileBERT (@vumichien)
  • MPNet
  • mT5
  • Nystromformer
  • OpenAI
  • Pegasus
  • Perceiver
  • PLBart
  • PoolFormer
  • ProphetNet
  • QDQBert
  • RAG
  • Realm
  • Reformer
  • ResNet
  • RemBERT
  • RetriBERT
  • RoBERTa (@patrickvonplaten)
  • RoFormer
  • SegFormer
  • SEW
  • SEW-D
  • SpeechEncoderDecoder
  • Speech2Text
  • Speech2Text2
  • Splinter
  • SqueezeBERT
  • Swin (@johko)
  • T5 (@MarkusSagen)
  • TAPAS
  • Transformer-XL (@simonzli)
  • TrOCR (@arnaudstiegler)
  • UniSpeech
  • UniSpeechSat
  • Van
  • ViLT
  • VisionEncoderDecoder
  • VisionTextDualEncoder
  • VisualBert
  • ViT (@johko)
  • ViTMAE
  • Wav2Vec2
  • WavLM
  • XGLM
  • XLM
  • XLM-RoBERTa (@AbinayaM02)
  • XLM-RoBERTa-XL
  • XLMProphetNet
  • XLNet
  • YOSO
