llm-claude

Plugin for LLM adding support for Anthropic's Claude models.

Installation

Install this plugin in the same environment as LLM.

llm install llm-claude

Configuration

You will need an API key from Anthropic. Request access at https://www.anthropic.com/earlyaccess, then create a key at https://console.anthropic.com/account/keys

You can set that as an environment variable called ANTHROPIC_API_KEY, or add it to llm's set of saved keys using:

llm keys set claude
Enter key: <paste key here>

Usage

This plugin adds models called claude and claude-instant.

Anthropic describes them as:

two families of models, both of which support 100,000 token context windows:

  • Claude Instant: low-latency, high throughput
  • Claude: superior performance on tasks that require complex reasoning

You can query them like this:

llm -m claude-instant "Ten great names for a new space station"
llm -m claude "Compare and contrast the leadership styles of Abraham Lincoln and Boris Johnson."

Options

  • max_tokens_to_sample, default 10_000: The maximum number of tokens to generate before stopping

Use like this:

llm -m claude -o max_tokens_to_sample 20 "Sing me the alphabet"
 Here is the alphabet song:

A B C D E F G
H I J

Development

To set up this plugin locally, first checkout the code. Then create a new virtual environment:

cd llm-claude
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

llm-claude's Issues

Migrating from Text Completions to Messages

First of all, thanks for this very useful plugin!

Messages will soon replace Text Completions as the primary method to use Anthropic's Claude API. A full migration guide is available here.

The Messages API handles things like system prompts and user/assistant roles more elegantly. Updating llm-claude to use Messages should be very straightforward with the migration guide.

Is that something that's planned, or something you'd be willing to look at by any chance?
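To make the difference concrete, here is a minimal sketch of the two request-body shapes, following Anthropic's migration guide; the helper names are my own, and the payloads are plain dicts rather than SDK calls:

```python
from typing import Optional


def completions_payload(prompt: str, max_tokens: int = 10_000) -> dict:
    """Legacy Text Completions: roles are encoded inside one prompt string."""
    return {
        "model": "claude-instant-1",
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }


def messages_payload(prompt: str, system: Optional[str] = None,
                     max_tokens: int = 10_000) -> dict:
    """Messages API: the system prompt and roles are structured fields."""
    payload = {
        "model": "claude-instant-1",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    if system is not None:
        payload["system"] = system
    return payload
```

Note how the system prompt, which had to be spliced into the prompt string under Text Completions, becomes a dedicated top-level field under Messages.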

Getting API Error 400: Invalid request

The plugin used to work fine for me, but I now get errors whenever I try to pass a prompt. Here is what I get using the llm python API to pass a prompt to Claude:

Traceback (most recent call last):
  File "myscript.py", line 40, in gen_longsum
    initial_response = response.text()
                      ^^^^^^^^^^^^^^^
  File "\venv\Lib\site-packages\llm\models.py", line 112, in text
    self._force()
  File "\venv\Lib\site-packages\llm\models.py", line 106, in _force
    list(self)
  File "\venv\Lib\site-packages\llm\models.py", line 91, in __iter__
    for chunk in self.model.execute(
  File "\venv\Lib\site-packages\llm_claude\__init__.py", line 54, in execute
    completion = anthropic.completions.create(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "\venv\Lib\site-packages\anthropic\_utils\_utils.py", line 250, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "\venv\Lib\site-packages\anthropic\resources\completions.py", line 223, in create
    return self._post(
           ^^^^^^^^^^^
  File "\venv\Lib\site-packages\anthropic\_base_client.py", line 949, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "\venv\Lib\site-packages\anthropic\_base_client.py", line 748, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "\venv\Lib\site-packages\anthropic\_base_client.py", line 785, in _request
    raise self._make_status_error_from_response(request, err.response) from None
anthropic.BadRequestError: <html><title>Error 400 (Bad Request)!!1</title></html>

Note that I get the same error if I use llm in the command line like so:

llm -m claude "Compare and contrast the leadership styles of Abraham Lincoln and Boris Johnson."

I get this output:

Error: <html><title>Error 400 (Bad Request)!!1</title></html>

Anthropic's official documentation describes Error 400 as:

Invalid request: there was an issue with the format or content of your request.

Was there a breaking change in the past month that required updating the plugin to pass prompts in the right format, or is there somehow an issue on my end?

Human: / Assistant: formatting

llm-claude currently follows the advice at https://docs.anthropic.com/claude/reference/getting-started-with-the-api#prompt-formatting to:

when using the API you must format the prompts like:
\n\nHuman: Why is the sky blue?\n\nAssistant:

For reference, these are the prompt constants from https://github.com/tomviner/anthropic-sdk-python/blob/d9b7e30b26235860fe9e7e3053171615173e32ca/src/anthropic/_constants.py#L1

HUMAN_PROMPT = "\n\nHuman:"

AI_PROMPT = "\n\nAssistant:"

However, the API also allows you to include text that starts the AI assistant's answer for it, which isn't currently possible with llm-claude.

On the Ask Claude for rewrites docs page, they explain how starting the AI answer with special text can help shape its response:



Human: {prompt listing some text and requesting a rewrite according to some rules}
Please put your rewrite in <rewrite></rewrite> tags.

Assistant: <rewrite>

This starts the assistant already in rewrite mode.
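The prefill idea above can be sketched as a small string helper, using the SDK's prompt constants quoted earlier; the function name is hypothetical, not part of llm-claude:

```python
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"


def prompt_with_prefill(human: str, prefill: str = "") -> str:
    """Build a prompt whose Assistant turn already begins with `prefill`."""
    prompt = f"{HUMAN_PROMPT} {human}{AI_PROMPT}"
    if prefill:
        # Pre-start the assistant's answer, so the completion continues
        # directly from this text.
        prompt += f" {prefill}"
    return prompt


prompt = prompt_with_prefill(
    "Rewrite this text. Put your rewrite in <rewrite></rewrite> tags.",
    prefill="<rewrite>",
)
# The prompt now ends with "Assistant: <rewrite>", so Claude's completion
# picks up inside the tag.
```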

A couple of examples of starting the AI response, for when you only want a specific format:

And another starting in <thinking> mode: https://docs.anthropic.com/claude/docs/roleplay-dialogue#complex-customer-support-agent

Another:

Assistant: Here is a summary of the document:

from https://docs.anthropic.com/claude/docs/claude-is-hallucinating#document-summary

Why?

Claude has been trained and fine-tuned using RLHF (reinforcement learning from human feedback) methods on \n\nHuman: and \n\nAssistant: data like this, so you will need to use these prompts in the API in order to stay “on-distribution” and get the expected results. It's important to remember to have the two newlines before both Human and Assistant, as that's what it was trained on.

However, I don't think llm-claude should leave the human/assistant prefixes completely up to the end user, because their presence is actually enforced at the API level:

$ curl -s --request POST \
     --url https://api.anthropic.com/v1/complete \
     --header "anthropic-version: 2023-06-01" \
     --header "content-type: application/json" \
     --header "x-api-key: $ANTHROPIC_API_KEY" \
     --data '
{
  "model": "claude-instant-1",
  "prompt": "\n\nHuman: 1+1",
  "max_tokens_to_sample": 10
}
'
{"error":{"type":"invalid_request_error","message":"prompt must end with \"\n\nAssistant:\" turn"}}

From playing around like this, the rules appear to be:

  • prompt must start with {HUMAN_PROMPT} (almost: only one newline is needed at the start, so "\nHuman:" also works)
  • prompt may contain multiple human and AI prompt prefixes. This is how continued conversations are sent.
  • prompt must include at least one {AI_PROMPT} that doesn't have a {HUMAN_PROMPT} after it.

These rules are similarly listed at https://docs.anthropic.com/claude/docs/prompt-troubleshooting-checklist#the-prompt-is-formatted-correctly
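The observed rules can be expressed as a small validator. This is my own sketch of the behaviour described above, not code from llm-claude or the SDK:

```python
import re

HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"


def is_valid_claude_prompt(prompt: str) -> bool:
    """Check the observed API rules: the prompt starts with a Human turn
    (a single leading newline is tolerated) and ends on an Assistant turn
    that has no Human turn after it (a prefilled answer is allowed)."""
    # Rule 1: must start with the Human prefix, with one or two newlines.
    if not re.match(r"\n{1,2}Human:", prompt):
        return False
    # Rules 2-3: the last Assistant prefix must appear after the last full
    # Human prefix, i.e. the prompt ends on a (possibly prefilled)
    # Assistant turn. Multiple alternating turns are fine.
    return prompt.rfind(AI_PROMPT) > prompt.rfind(HUMAN_PROMPT)
```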

anthropic-sdk-python conflicting requirements with llm

pip gives an error when installing both llm and anthropic (the client library for Claude). However, it does install both libraries:

$ pip install llm

Successfully installed PyYAML-6.0 aiohttp-3.8.4 aiosignal-1.3.1 annotated-types-0.5.0 async-timeout-4.0.2 attrs-23.1.0 certifi-2023.5.7 charset-normalizer-3.2.0 click-8.1.5 click-default-group-wheel-1.2.2 frozenlist-1.4.0 idna-3.4 llm-0.5 multidict-6.0.4 openai-0.27.8 pluggy-1.2.0 pydantic-2.0.3 pydantic-core-2.3.0 python-dateutil-2.8.2 python-ulid-1.1.0 requests-2.31.0 six-1.16.0 sqlite-fts4-1.0.3 sqlite-utils-3.33 tabulate-0.9.0 tqdm-4.65.0 typing-extensions-4.7.1 urllib3-2.0.3 yarl-1.9.2

$ pip install anthropic

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
llm 0.5 requires pydantic>=2.0.0, but you have pydantic 1.10.11 which is incompatible.

Successfully installed anthropic-0.3.4 anyio-3.7.1 distro-1.8.0 exceptiongroup-1.1.2 h11-0.14.0 httpcore-0.17.3 httpx-0.24.1 pydantic-1.10.11 sniffio-1.3.0 tokenizers-0.13.3

The problem is llm requires pydantic v2:

llm 0.5 requires pydantic>=2.0.0

but anthropic has a pyproject with:

pydantic = "^1.9.0"

which is locked with the poetry pin:

[[package]]
name = "pydantic"
version = "1.10.2"
