.. _library-openai:

OpenAI
======

OpenAI_ provides a client library for calling Large Language Models (LLMs).

.. _OpenAI: https://github.com/openai/openai-python

eli5 supports :func:`eli5.explain_prediction` for
``ChatCompletion``, ``ChoiceLogprobs`` and ``openai.Client`` objects,
highlighting tokens proportionally to their log probability:
more likely tokens are highlighted in green,
while unlikely tokens are highlighted in red.

Explaining with a client, invoking the model with ``logprobs`` enabled::

    import eli5
    import openai

    client = openai.Client()
    prompt = 'some string'  # or [{"role": "user", "content": "some string"}]
    explanation = eli5.explain_prediction(client, prompt, model='gpt-4o')
    explanation

Any extra keyword arguments passed to :func:`eli5.explain_prediction`
are forwarded to ``client.chat.completions.create``:
e.g. you may pass ``n=2`` to get multiple responses
and see an explanation for each of them, as shown below.
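
For example, a minimal sketch requesting two completions at once
(assuming each returned choice becomes a separate entry in
``explanation.targets``)::

    explanation = eli5.explain_prediction(client, prompt, model='gpt-4o', n=2)
    for target in explanation.targets:
        # each target wraps one Choice from the API response
        print(target.target.message.content)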

You'd normally want to run it in a Jupyter notebook to see the explanation
formatted as HTML.
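
Outside of the automatic rich display at the end of a cell, you can render it
explicitly (a sketch, assuming the explanation exposes an HTML representation
that IPython picks up)::

    from IPython.display import display

    display(explanation)  # renders the HTML-formatted explanation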

You can access the ``Choice`` object as ``explanation.targets[0].target``::

    explanation.targets[0].target.message.content

If you have already obtained a chat completion with ``logprobs`` from the
OpenAI client, you may call :func:`eli5.explain_prediction` with the
``ChatCompletion`` or ``ChoiceLogprobs`` object like this::

    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="gpt-4o",
        logprobs=True,
    )
    eli5.explain_prediction(chat_completion)  # or
    eli5.explain_prediction(chat_completion.choices[0].logprobs)

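The highlighting is driven by the per-token log probabilities returned by the
API. A minimal sketch of inspecting them directly, using the standard fields
of the OpenAI response::

    import math

    for token_logprob in chat_completion.choices[0].logprobs.content:
        # recover each sampled token's probability from its logprob
        print(token_logprob.token, math.exp(token_logprob.logprob))
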
.. note::
    While token probabilities reflect model uncertainty in many cases,
    they are not always indicative,
    e.g. in the case of `Chain of Thought <https://arxiv.org/abs/2201.11903>`_
    reasoning preceding the final response.

.. note::
    Top-level :func:`eli5.explain_prediction` calls are dispatched to
    :func:`eli5.llm.explain_prediction.explain_prediction_openai_client`,
    :func:`eli5.llm.explain_prediction.explain_prediction_openai_completion`,
    or :func:`eli5.llm.explain_prediction.explain_prediction_openai_logprobs`.