
[Bug] LiteLLM exception when using OpenAI Speech-To-Text models #8996

@LizardBlizzard

Description

What happened?

I'm trying to use DSPy for an audio transcription task. From what I can tell from the documentation, this should be supported via the dedicated dspy.Audio input type.
When I run the attached code example (see "Steps to reproduce" below), I receive the following error:

litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?

I can, however, use OpenAI's speech-to-text models through LiteLLM directly:

from pathlib import Path

from litellm import transcription

# Calling LiteLLM's transcription endpoint directly works as expected.
with Path("example_001.m4a").open("rb") as audio_file:
    response = transcription(model="gpt-4o-transcribe", file=audio_file)
print(f"response: {response}")

Am I missing something, or is this a bug in the way DSPy calls LiteLLM behind the scenes?
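
For reference, the error message suggests the request is being routed through the chat completions endpoint. A minimal way to trigger the same failure directly with LiteLLM, which is presumably roughly what dspy.LM does under the hood, would be:

from litellm import completion

# gpt-4o-mini-transcribe is a transcription-only model, so the
# /v1/chat/completions endpoint rejects it with the same BadRequestError.
completion(
    model="gpt-4o-mini-transcribe",
    messages=[{"role": "user", "content": "hello"}],
)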

Steps to reproduce

from pathlib import Path

import dspy


class TranscribeSignature(dspy.Signature):
    audio: dspy.Audio = dspy.InputField()
    transcription: str = dspy.OutputField()


if __name__ == "__main__":
    audio_path = Path("example_001.m4a")
    audio = dspy.Audio.from_file(audio_path)

    lm = dspy.LM(model="gpt-4o-mini-transcribe")
    dspy.configure(lm=lm)

    transcriber = dspy.Predict(TranscribeSignature)

    transcription = transcriber(audio=audio)  # <- raises litellm.exceptions.BadRequestError
    print(transcription)
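
In the meantime, bypassing dspy.Predict and calling litellm.transcription directly should work as a stopgap. A sketch, reusing the snippet from above (the helper name and default model are my own):

from pathlib import Path

from litellm import transcription


def transcribe(path: Path, model: str = "gpt-4o-mini-transcribe") -> str:
    # Skip dspy.Predict entirely and call LiteLLM's transcription endpoint.
    with path.open("rb") as audio_file:
        response = transcription(model=model, file=audio_file)
    # .text mirrors the OpenAI transcription response.
    return response.text


print(transcribe(Path("example_001.m4a")))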

DSPy version

3.0.3

    Labels

    bug (Something isn't working)
