.. _library-openai:

OpenAI
======

OpenAI_ provides a client library for calling Large Language Models (LLMs).

.. _OpenAI: https://github.com/openai/openai-python

eli5 supports :func:`eli5.explain_prediction` for
``ChatCompletion``, ``ChoiceLogprobs`` and ``openai.Client`` objects,
highlighting tokens proportionally to their log probability,
which can help to see where the model is less confident in its predictions.
More likely tokens are highlighted in green,
while unlikely tokens are highlighted in red:

.. image:: ../static/llm-explain-logprobs.png
    :alt: LLM token probabilities visualized

Explaining with a client, invoking the model with ``logprobs`` enabled:
::

    import eli5
    import openai

    client = openai.Client()
    prompt = 'some string'  # or [{"role": "user", "content": "some string"}]
    explanation = eli5.explain_prediction(client, prompt, model='gpt-4o')
    explanation

You may pass any extra keyword arguments to :func:`eli5.explain_prediction`;
they will be passed on to ``client.chat.completions.create``,
e.g. you may pass ``n=2`` to get multiple responses
and see an explanation for each of them.
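
As a minimal sketch (the prompt and model name are placeholders, and it
assumes each returned choice shows up as a separate entry in
``explanation.targets``):
::

    import eli5
    import openai

    client = openai.Client()
    explanation = eli5.explain_prediction(
        client, 'some string', model='gpt-4o', n=2)
    for target in explanation.targets:
        # each target wraps one OpenAI Choice; print its response text
        print(target.target.message.content)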

You'd normally want to run it in a Jupyter notebook to see the explanation
formatted as HTML.
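
Outside a notebook you can render the explanation yourself; a hedged sketch,
assuming :func:`eli5.format_as_html` handles this explanation object the same
way it handles other eli5 explanations:
::

    import eli5

    # render the explanation to an HTML string and save it to a file
    html = eli5.format_as_html(explanation)
    with open('explanation.html', 'w') as f:
        f.write(html)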

You can access the ``Choice`` object as ``explanation.targets[0].target``:
::

    explanation.targets[0].target.message.content
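
Since the target is the raw OpenAI ``Choice``, its other attributes are
available too; a small sketch, assuming the completion was created with
``logprobs`` enabled so the ``logprobs`` field is populated:
::

    choice = explanation.targets[0].target
    print(choice.finish_reason)         # e.g. 'stop'
    first = choice.logprobs.content[0]  # first ChatCompletionTokenLogprob
    print(first.token, first.logprob)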

If you have already obtained a chat completion with ``logprobs`` from the
OpenAI client, you may call :func:`eli5.explain_prediction` with a
``ChatCompletion`` or ``ChoiceLogprobs`` like this:
::

    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="gpt-4o",
        logprobs=True,
    )
    eli5.explain_prediction(chat_completion)  # or
    eli5.explain_prediction(chat_completion.choices[0].logprobs)

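The green/red highlighting is driven by these per-token log probabilities;
as a rough illustration (not eli5's exact rendering logic), you can convert
them to probabilities yourself:
::

    import math

    for token_logprob in chat_completion.choices[0].logprobs.content:
        # exp() turns a log probability back into a probability
        probability = math.exp(token_logprob.logprob)
        print(f'{token_logprob.token!r}: {probability:.2%}')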

See the :ref:`tutorial <explain-llm-logprobs-tutorial>` for a more detailed usage
example.

.. note::
    While token probabilities reflect model uncertainty in many cases,
    they are not always a reliable indicator,
    e.g. when `Chain of Thought <https://arxiv.org/abs/2201.11903>`_
    reasoning precedes the final response.

.. note::
    Top-level :func:`eli5.explain_prediction` calls are dispatched to
    :func:`eli5.llm.explain_prediction.explain_prediction_openai_client`,
    :func:`eli5.llm.explain_prediction.explain_prediction_openai_completion`
    or :func:`eli5.llm.explain_prediction.explain_prediction_openai_logprobs`,
    depending on the type of the first argument.