
davidmigloz / langchain_dart


Build LLM-powered Dart/Flutter applications.

Home Page: https://langchaindart.dev

License: MIT License

Dart 99.91% Kotlin 0.01% Ruby 0.04% Swift 0.02% Objective-C 0.01% HTML 0.03%
ai generative-ai llms nlp dart flutter

langchain_dart's Introduction

πŸ¦œοΈπŸ”— LangChain.dart


Build LLM-powered Dart/Flutter applications.

What is LangChain.dart?

LangChain.dart is an unofficial Dart port of the popular LangChain Python framework created by Harrison Chase.

LangChain provides a set of ready-to-use components for working with language models and a standard interface for chaining them together to formulate more advanced use cases (e.g. chatbots, Q&A with RAG, agents, summarization, translation, extraction, recsys, etc.).

The components can be grouped into a few core modules:

LangChain.dart

  • πŸ“ƒ Model I/O: LangChain offers a unified API for interacting with various LLM providers (e.g. OpenAI, Google, Mistral, Ollama, etc.), allowing developers to switch between them with ease. Additionally, it provides tools for managing model inputs (prompt templates and example selectors) and parsing the resulting model outputs (output parsers).
  • πŸ“š Retrieval: assists in loading user data (via document loaders), transforming it (with text splitters), extracting its meaning (using embedding models), storing (in vector stores) and retrieving it (through retrievers) so that it can be used to ground the model's responses (i.e. Retrieval-Augmented Generation or RAG).
  • πŸ€– Agents: "bots" that leverage LLMs to make informed decisions about which available tools (such as web search, calculators, database lookup, etc.) to use to accomplish the designated task.

The different components can be composed together using the LangChain Expression Language (LCEL).
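As a quick illustration of LCEL, components are composed by piping one into the next. A minimal sketch, assuming ChatOpenAI from langchain_openai and an openaiApiKey variable in scope (the joke prompt is purely illustrative):

```dart
// Minimal LCEL sketch: a prompt template piped into a chat model and an
// output parser. Assumes langchain + langchain_openai are installed and
// an openaiApiKey variable exists.
final promptTemplate = ChatPromptTemplate.fromTemplate(
  'Tell me a joke about {topic}',
);
final chain = promptTemplate
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(StringOutputParser<ChatResult>());
final joke = await chain.invoke({'topic': 'llamas'});
```

The same pipe pattern scales from this two-step chain to full RAG pipelines.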

Motivation

Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), serving as essential components in a wide range of applications, such as question-answering, summarization, translation, and text generation.

The adoption of LLMs is creating a new tech stack in its wake. However, emerging libraries and tools are predominantly being developed for the Python and JavaScript ecosystems. As a result, the number of applications leveraging LLMs in these ecosystems has grown exponentially.

In contrast, the Dart / Flutter ecosystem has not experienced similar growth, which can likely be attributed to the scarcity of Dart and Flutter libraries that streamline the complexities associated with working with LLMs.

LangChain.dart aims to fill this gap by abstracting the intricacies of working with LLMs in Dart and Flutter, enabling developers to harness their combined potential effectively.

Packages

LangChain.dart has a modular design that allows developers to import only the components they need. The ecosystem consists of several packages:

langchain_core: contains only the core abstractions, as well as the LangChain Expression Language (LCEL) as a way to compose them together.

Depend on this package to build frameworks on top of LangChain.dart or to interoperate with it.

langchain: contains higher-level and use-case-specific chains, agents, and retrieval algorithms that are at the core of the application's cognitive architecture.

Depend on this package to build LLM applications with LangChain.dart.

This package exposes langchain_core so you don't need to depend on it explicitly.

langchain_community: contains third-party integrations and community-contributed components that are not part of the core LangChain.dart API.

Depend on this package if you want to use any of the integrations or components it provides.

Integration-specific packages

Popular third-party integrations (e.g. langchain_openai, langchain_google, langchain_ollama, etc.) are moved to their own packages so that they can be imported independently without depending on the entire langchain_community package.

Depend on an integration-specific package if you want to use the specific integration.

langchain_core: Core abstractions and LCEL
langchain: Higher-level and use-case-specific chains, agents, and retrieval algorithms
langchain_community: Third-party integrations (without specific packages) and community-contributed components
langchain_openai: OpenAI integration (GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, Embeddings, Tools, Vision, DALL·E 3, etc.) and OpenAI-compatible services (TogetherAI, Anyscale, OpenRouter, One API, Groq, Llamafile, GPT4All, etc.)
langchain_google: Google integration (GoogleAI, VertexAI, Gemini, PaLM 2, Embeddings, Vector Search, etc.)
langchain_firebase: Firebase integration (Vertex AI for Firebase: Gemini 1.5 Pro, Gemini 1.5 Flash, etc.)
langchain_ollama: Ollama integration (Llama 3, Phi-3, WizardLM-2, Mistral 7B, Gemma, CodeGemma, Command R, LLaVA, DBRX, Qwen 1.5, Dolphin, DeepSeek Coder, Vicuna, Orca, etc.)
langchain_mistralai: Mistral AI integration (Mistral-7B, Mixtral 8x7B, Mixtral 8x22B, Mistral Small, Mistral Large, embeddings, etc.)
langchain_pinecone: Pinecone vector database integration
langchain_chroma: Chroma vector database integration
langchain_supabase: Supabase Vector database integration

Functionality provided by each integration package:

  • langchain_openai: LLMs, Chat models, Embeddings, Chains, Agents, Tools
  • langchain_google: LLMs, Chat models, Embeddings, Vector stores
  • langchain_firebase: Chat models
  • langchain_ollama: LLMs, Chat models, Embeddings
  • langchain_mistralai: Chat models, Embeddings
  • langchain_pinecone: Vector stores
  • langchain_chroma: Vector stores
  • langchain_supabase: Vector stores

The following packages are maintained (and used internally) by LangChain.dart, although they can also be used independently:

anthropic_sdk_dart: Anthropic (Claude API) client
chromadb: Chroma DB API client
googleai_dart: Google AI for Developers (Gemini API) client
mistralai_dart: Mistral AI API client
ollama_dart: Ollama API client
openai_dart: OpenAI API client
vertex_ai: GCP Vertex AI API client

Getting started

To start using LangChain.dart, add langchain as a dependency to your pubspec.yaml file. Also include the dependencies for the specific integrations you want to use (e.g. langchain_community, langchain_openai, langchain_google, etc.):

dependencies:
  langchain: {version}
  langchain_community: {version}
  langchain_openai: {version}
  langchain_google: {version}
  ...

The most basic building block of LangChain.dart is calling an LLM on some prompt. LangChain.dart provides a unified interface for calling different LLMs. For example, we can use ChatGoogleGenerativeAI to call Google's Gemini model:

final model = ChatGoogleGenerativeAI(apiKey: googleApiKey);
final prompt = PromptValue.string('Hello world!');
final result = await model.invoke(prompt);
// Hello everyone! I'm new here and excited to be part of this community.

But the power of LangChain.dart comes from chaining together multiple components to implement complex use cases. For example, a RAG (Retrieval-Augmented Generation) pipeline that would accept a user query, retrieve relevant documents from a vector store, format them using prompt templates, invoke the model, and parse the output:

// 1. Create a vector store and add documents to it
final vectorStore = MemoryVectorStore(
  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
);
await vectorStore.addDocuments(
  documents: [
    Document(pageContent: 'LangChain was created by Harrison'),
    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
  ],
);

// 2. Define the retrieval chain
final retriever = vectorStore.asRetriever();
final setupAndRetrieval = Runnable.fromMap<String>({
  'context': retriever.pipe(
    Runnable.mapInput((docs) => docs.map((d) => d.pageContent).join('\n')),
  ),
  'question': Runnable.passthrough(),
});

// 3. Construct a RAG prompt template
final promptTemplate = ChatPromptTemplate.fromTemplates([
  (ChatMessageType.system, 'Answer the question based only on the following context:\n{context}'),
  (ChatMessageType.human, '{question}'),
]);

// 4. Define the final chain
final model = ChatOpenAI(apiKey: openaiApiKey);
const outputParser = StringOutputParser<ChatResult>();
final chain = setupAndRetrieval
    .pipe(promptTemplate)
    .pipe(model)
    .pipe(outputParser);

// 5. Run the pipeline
final res = await chain.invoke('Who created LangChain.dart?');
print(res);
// David created LangChain.dart

Documentation

Community

Stay up-to-date on the latest news and updates on the field, have great discussions, and get help in the official LangChain.dart Discord server.


Contribute

πŸ“’ Call for Collaborators πŸ“’
We are looking for collaborators to join the core group of maintainers.

New contributors welcome! Check out our Contributors Guide for help getting started.

Join us on Discord to meet other maintainers. We'll help you get your first contribution in no time!

Related projects

Sponsors

License

LangChain.dart is licensed under the MIT License.

langchain_dart's People

Contributors

alfredobs97, davidmigloz, dependabot[bot], derek-x-wang, dileep9490, f3ath, faithoflifedev, havendv, itp2023, kndpt, luisredondo, matteodg, mauricepoirrier, mirkancal, mthongvanh, orenagiv, walsha2


langchain_dart's Issues

Support streaming for LLMs and Chat models

OpenAI

By default, when you request a completion from the OpenAI API, the entire completion is generated before being sent back in a single response. If you're generating long completions, waiting for the response can take many seconds.

To get responses sooner, you can 'stream' the completion as it's being generated. This allows you to start printing or processing the beginning of the completion before the full completion is finished.

To stream completions, set the stream parameter to true when calling the chat completions or completions endpoints. This returns an object that streams back the response as data-only server-sent events. Extract chunks from the delta field rather than the message field.
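On the LangChain.dart side, a minimal streaming sketch, assuming models expose a stream method emitting partial ChatResult chunks, and an openaiApiKey variable in scope:

```dart
// Hedged sketch: assumes LangChain.dart's stream() API on models, which
// yields partial ChatResult chunks as the completion is generated.
// stdout comes from dart:io.
final model = ChatOpenAI(apiKey: openaiApiKey);
final stream = model.stream(PromptValue.string('Tell me a joke'));
await for (final chunk in stream) {
  stdout.write(chunk.output.content); // print each partial piece as it arrives
}
```

This lets the UI start rendering the response before the full completion is finished.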

VertexAI

Document translation

In the Quickstart and 'Getting Started' docs, the following example appears:

final llm = OpenAI(apiKey: openaiApiKey, temperature: 0.9);

We can now call it on some input!

final text = 'What would be a good company name for a company that makes colorful socks?';
print(await llm(prompt: text)); // 'Feetful of Fun'

The problematic line is print(await llm(prompt: text));.

Invoking an object with function syntax only works in Dart if the class defines a call method, so I think this is a simple translation error from the Python docs. I don't really know Python, but apparently:

Python has a set of special methods, and __call__ is one of them. The __call__ method lets programmers write classes whose instances behave like functions and can be called like a function: when the instance is called, x(arg1, arg2, ...) is shorthand for x.__call__(arg1, arg2, ...).
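Worth noting: Dart has an analogous feature. A class that defines a call method can be invoked with function syntax, so llm(prompt: text) could work if the LLM class declared one. A self-contained demo (the Adder class is hypothetical, purely for illustration):

```dart
// Demo of Dart callable classes: defining `call` makes instances
// invocable with function syntax, much like Python's __call__.
class Adder {
  int call(int a, int b) => a + b;
}

void main() {
  final add = Adder();
  print(add(1, 2)); // prints: 3 — equivalent to add.call(1, 2)
}
```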

Possible deployment error

I installed the library from pub.dev: langchain: ^0.0.1-dev.1

I am getting a compile error, with call (and generate) not recognised as methods within the second line of:

  final OpenAI llm = OpenAI(apiKey: openaiApiKey, temperature: 0.9);
  print(await llm.call(prompt: 'x'));

I drilled through to langchain.dart, which contains only:

library;

export 'src/langchain_base.dart';

which in turn contains only:

class LangChain {
  bool get isAwesome => true;
}

The code I cloned from the main branch is different; there, langchain.dart contains:

/// Build powerful LLM-based Dart/Flutter applications.
library;

export 'src/llms/base.dart';
export 'src/schema.dart';

and src/llms/base.dart contains code for BaseLLM etc.

It seems that what was deployed to pub.dev is behind the repository code.

Add support for PipelinePromptTemplate class

A prompt template for composing multiple prompts together.

This can be useful when you want to reuse parts of prompts.
A PipelinePrompt consists of two main parts:

  • final_prompt: the final prompt that is returned.
  • pipeline_prompts: a list of tuples, each consisting of a name (string) and a prompt template. Each prompt template is formatted and then passed to future prompt templates as a variable with the same name.

https://python.langchain.com/en/latest/modules/prompts/prompt_templates/examples/prompt_composition.html?highlight=PipelinePromptTemplate#prompt-composition
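Until an equivalent class exists in LangChain.dart, sub-prompts can be composed by hand. A rough sketch, assuming the existing PromptTemplate.fromTemplate and format APIs from langchain_core (all template strings and variable names here are hypothetical):

```dart
// Manual prompt composition, approximating what PipelinePromptTemplate
// would automate: format each sub-prompt, then feed the results into a
// final template as named variables.
final introPrompt = PromptTemplate.fromTemplate('You are impersonating {person}.');
final examplePrompt = PromptTemplate.fromTemplate("Here's an example:\nQ: {q}\nA: {a}");
final finalPrompt = PromptTemplate.fromTemplate(
  '{introduction}\n\n{example}\n\nNow answer:\nQ: {input}\nA:',
);

final prompt = finalPrompt.format({
  'introduction': introPrompt.format({'person': 'Elon Musk'}),
  'example': examplePrompt.format({'q': "What's your favorite car?", 'a': 'Tesla'}),
  'input': "What's your favorite social media site?",
});
```

A PipelinePromptTemplate class would essentially perform these intermediate format calls automatically, in order.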

Support estimating the number of tokens for a given prompt

LangChain.py provides the following API for estimating how many tokens a piece of text corresponds to in a given model.

llm.get_num_tokens("what a joke")  # returns 3

Knowing the number of tokens is important to optimize costs or maximize the prompt based on the max context length of the model.

LangChain.py uses tiktoken by default.
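A hedged sketch of what the Dart equivalent could look like, assuming a countTokens method on LangChain.dart models and an openaiApiKey variable in scope:

```dart
// Hedged sketch: assumes a countTokens method on LangChain.dart models
// (OpenAI models would be tokenized via a tiktoken port under the hood).
final llm = OpenAI(apiKey: openaiApiKey);
final numTokens = await llm.countTokens(PromptValue.string('what a joke'));
print(numTokens); // 3, per the Python example above
```

This would make it possible to estimate costs and check prompts against a model's max context length before sending a request.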
