
langchain: catch if there are only system messages in a prompt for anthropic #30822


Open
wants to merge 6 commits into master

Conversation

mathislindner

PR message: Not sure if I put the check in the right spot, but throwing the error before the loop made sense to me.
Description: Checks whether a prompt sent to the ChatAnthropic model contains only system messages and throws an error if that's the case. See the linked issue for more details.
Issue: #30764
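
As a rough sketch, the guard described above might look like the following. The helper name `_check_no_system_only_prompt` and its placement before the message-conversion loop are my own framing, not the actual diff:

```python
from langchain_core.messages import BaseMessage, SystemMessage

def _check_no_system_only_prompt(messages: list[BaseMessage]) -> None:
    """Fail fast: Anthropic rejects prompts with no non-system messages."""
    if messages and all(isinstance(m, SystemMessage) for m in messages):
        raise ValueError("Received only system message(s). ")
```

Calling this before the conversion loop avoids sending a request that Anthropic is guaranteed to reject.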



@dosubot added the size:XS (This PR changes 0-9 lines, ignoring generated files), langchain (Related to the langchain package), and 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) labels on Apr 14, 2025
Collaborator

@ccurme left a comment


Instead of blocking this ourselves, could we catch the error from Anthropic and add whatever clarifications we want to the error message? That way, if Anthropic adds support, we won't be blocking it.

Here's an example:

```python
def _handle_openai_bad_request(e: openai.BadRequestError) -> None:
```

This will need to be handled for all invocation modes (sync/async invoke and stream).
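
A minimal sketch of that alternative, assuming the `anthropic` SDK's `BadRequestError`, a hypothetical helper name mirroring the OpenAI one, and an assumed error substring:

```python
import anthropic

def _handle_anthropic_bad_request(e: anthropic.BadRequestError) -> None:
    # Assumed substring; the exact wording of Anthropic's error may differ.
    if "at least one message is required" in str(e):
        raise ValueError(
            "Received only system message(s). Anthropic requires at least "
            "one non-system message in the prompt."
        ) from e
    raise e

# Each invocation path (sync/async invoke and stream) would then wrap its API call:
# try:
#     response = client.messages.create(...)
# except anthropic.BadRequestError as e:
#     _handle_anthropic_bad_request(e)
```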

@ccurme self-assigned this on Apr 14, 2025
@mathislindner
Author

Okay, I just thought it would be wiser to do this before sending Anthropic a call that would get us a bad request, but yes, it does make sense to expect them to change it down the line :)

@dosubot added the size:M (This PR changes 30-99 lines, ignoring generated files) label and removed the size:XS label on Apr 14, 2025
@mathislindner
Author

mathislindner commented Apr 14, 2025

```python
# Exercises all four invocation paths; each should raise before calling Anthropic.
import asyncio

from langchain_core.messages import SystemMessage
from langchain.chat_models import init_chat_model

LLM = init_chat_model("claude-3-5-haiku-20241022", model_provider="anthropic")
prompt = SystemMessage("test")

def test_generate():
    try:
        _ = LLM._generate([prompt])
    except ValueError as e:
        assert str(e) == "Received only system message(s). "
    else:
        raise AssertionError("expected ValueError for system-only prompt")

def test_stream():
    try:
        for _ in LLM._stream([prompt]):
            pass
    except ValueError as e:
        assert str(e) == "Received only system message(s). "
    else:
        raise AssertionError("expected ValueError for system-only prompt")

async def test_agenerate():
    try:
        _ = await LLM._agenerate([prompt])
    except ValueError as e:
        assert str(e) == "Received only system message(s). "
    else:
        raise AssertionError("expected ValueError for system-only prompt")

async def test_astream():
    try:
        async for _ in LLM._astream([prompt]):
            pass
    except ValueError as e:
        assert str(e) == "Received only system message(s). "
    else:
        raise AssertionError("expected ValueError for system-only prompt")

if __name__ == "__main__":
    test_generate()
    test_stream()
    asyncio.run(test_agenerate())
    asyncio.run(test_astream())
```

This throws the errors. I wasn't sure if you wanted tests, but this works now.
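
For reference, here is how the guard would surface through the public API. This is illustrative only; the expected error text is taken from the asserts above:

```python
from langchain_core.messages import SystemMessage
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-20241022")
try:
    llm.invoke([SystemMessage("test")])
except ValueError as e:
    print(e)  # Received only system message(s).
```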
