
ChatGPT integration for Nuxt 3

Home Page: https://www.buymeacoffee.com/schnapsterdog

License: MIT License

Vue 10.66% TypeScript 89.34%
chatgpt chatgpt-api chatgpt-bot chatgpt3 nuxt3 typescript vue3

nuxt-chatgpt's Introduction


Nuxt Chatgpt


ChatGPT integration for Nuxt 3.



About the module

This user-friendly module offers an easy integration process, enabling seamless implementation into any Nuxt 3 project. With type-safe integration, you can add ChatGPT to your Nuxt 3 project without breaking a sweat. Enjoy easy access to the chat and chatCompletion methods through the useChatgpt() composable. Additionally, the module guarantees security, as requests are routed through a Nitro server, preventing the exposure of your API key. The module uses the openai library (version 4.0.0) behind the scenes.

Features

  • 💪   Easy implementation into any Nuxt 3 project.
  • 👉   Type-safe integration of ChatGPT into your Nuxt 3 project.
  • 🕹️   Provides a useChatgpt() composable that grants easy access to the chat and chatCompletion methods.
  • 🔥   Ensures security by routing requests through a Nitro server, preventing your API key from being exposed.
  • 🧱   Lightweight and performant.

Getting Started

  1. Add nuxt-chatgpt dependency to your project
  • npm
    npm install --save-dev nuxt-chatgpt
  • pnpm
    pnpm add -D nuxt-chatgpt
  • yarn
    yarn add --dev nuxt-chatgpt
  2. Add nuxt-chatgpt to the modules section of nuxt.config.ts
export default defineNuxtConfig({
  modules: ["nuxt-chatgpt"],

  // entirely optional
  chatgpt: {
    apiKey: 'Your API key goes here'
  },
})

That's it! You can now use Nuxt Chatgpt in your Nuxt app 🔥

Usage & Examples

To access the chat and chatCompletion methods of the nuxt-chatgpt module, use the useChatgpt() composable. Both methods accept the following parameters:

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| message | String | none (available only for chat()) | The text message to send to the GPT model for processing. |
| messages | Array | none (available only for chatCompletion()) | An array of objects, each containing a role and content. |
| model | String | text-davinci-003 for chat(), gpt-3.5-turbo for chatCompletion() | The model to use for the natural-language task. |
| options | Object | { temperature: 0.5, max_tokens: 2048, top_p: 1, frequency_penalty: 0, presence_penalty: 0 } | An optional object of additional options for the API request, such as the sampling temperature and the maximum length of each response. |

Available models:

  • text-davinci-002
  • text-davinci-003
  • gpt-3.5-turbo
  • gpt-3.5-turbo-0301
  • gpt-3.5-turbo-1106
  • gpt-4
  • gpt-4-1106-preview
  • gpt-4-0314
  • gpt-4-0613
  • gpt-4-32k
  • gpt-4-32k-0314
  • gpt-4-32k-0613

You need to join the waitlist to use gpt-4 models with the chatCompletion method.
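The parameter shapes above can be sketched as plain objects (the option names mirror the OpenAI API; the values shown are the module's documented defaults):

```javascript
// Sketch of the argument shapes from the table above.
const messages = [
  { role: 'user', content: 'Hello!' }, // chatCompletion() takes an array like this
]

const options = {
  temperature: 0.5,       // randomness of the output
  max_tokens: 2048,       // upper bound on response length
  top_p: 1,               // nucleus-sampling cutoff
  frequency_penalty: 0,   // discourage repeated tokens
  presence_penalty: 0,    // discourage reused topics
}

// Inside a Nuxt component, a call would then look like:
// const response = await chatCompletion(messages, 'gpt-3.5-turbo', options)
```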

Simple chat usage

In the following example, the model is unspecified, and the text-davinci-003 model will be used by default.

<script setup>
const { chat } = useChatgpt()

const data = ref('')
const inputData = ref('')

async function sendMessage() {
  try {
    const response = await chat(inputData.value)
    data.value = response
  } catch (error) {
    alert(`Join the waiting list if you want to use GPT-4 models: ${error}`)
  }
}
</script>
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div>{{ data }}</div>
  </div>
</template>

Usage of chat with different model

<script setup>
const { chat } = useChatgpt()

const data = ref('')
const inputData = ref('')

async function sendMessage() {
  try {
    const response = await chat(inputData.value, 'gpt-3.5-turbo')
    data.value = response
  } catch (error) {
    alert(`Join the waiting list if you want to use GPT-4 models: ${error}`)
  }
}
</script>
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div>{{ data }}</div>
  </div>
</template>

Simple chatCompletion usage

In the following example, the model is unspecified, and the gpt-3.5-turbo model will be used by default.

<script setup>
const { chatCompletion } = useChatgpt()

const chatTree = ref([])
const inputData = ref('')

async function sendMessage() {
  try {
    const message = {
      role: 'user',
      content: inputData.value,
    }

    chatTree.value.push(message)

    const response = await chatCompletion(chatTree.value)

    const responseMessage = {
      role: response[0].message.role,
      content: response[0].message.content,
    }

    chatTree.value.push(responseMessage)
  } catch (error) {
    alert(`Join the waiting list if you want to use GPT-4 models: ${error}`)
  }
}
</script>
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div>
      <div
        v-for="(chat, index) in chatTree"
        :key="index"
      >
        <strong>{{ chat.role }} :</strong>
        <div>{{ chat.content }} </div>
      </div>
    </div>
  </div>
</template>

Usage of chatCompletion with different model

<script setup>
const { chatCompletion } = useChatgpt()

const chatTree = ref([])
const inputData = ref('')

async function sendMessage() {
  try {
    const message = {
      role: 'user',
      content: inputData.value,
    }

    chatTree.value.push(message)

    const response = await chatCompletion(chatTree.value, 'gpt-3.5-turbo-0301')

    const responseMessage = {
      role: response[0].message.role,
      content: response[0].message.content,
    }

    chatTree.value.push(responseMessage)
  } catch (error) {
    alert(`Join the waiting list if you want to use GPT-4 models: ${error}`)
  }
}
</script>
<template>
  <div>
    <input v-model="inputData">
    <button
      @click="sendMessage"
      v-text="'Send'"
    />
    <div>
      <div
        v-for="(chat, index) in chatTree"
        :key="index"
      >
        <strong>{{ chat.role }} :</strong>
        <div>{{ chat.content }} </div>
      </div>
    </div>
  </div>
</template>

chat vs chatCompletion

The chat method allows the user to send a prompt to the OpenAI API and receive a response. You can use this endpoint to build conversational interfaces that can interact with users in a natural way. For example, you could use the chat method to build a chatbot that can answer customer service questions or provide information about a product or service.

The chatCompletion method is similar to the chat method, but it provides additional functionality for generating longer, more complex responses. Specifically, the chatCompletion method allows you to provide a conversation history as input, which the API can use to generate a response that is consistent with the context of the conversation. This makes it possible to build chatbots that can engage in longer, more natural conversations with users.
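The practical difference is the input shape: chat() takes a single prompt string, while chatCompletion() takes the whole conversation history, so the model can resolve follow-up questions from context. An illustrative sketch (the strings are made up for the example):

```javascript
// chat() takes a single prompt string:
const prompt = 'What is Nuxt 3?'

// chatCompletion() takes the full conversation so far, so a follow-up
// like "Does it support TypeScript?" is answered in context:
const history = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is Nuxt 3?' },
  { role: 'assistant', content: 'Nuxt 3 is a web framework built on Vue 3.' },
  { role: 'user', content: 'Does it support TypeScript?' },
]
```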

Module Options

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| apiKey | String | xxxxxx | Your OpenAI API key. |
| isEnabled | Boolean | true | Enables or disables the module. |
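Both options are set in nuxt.config.ts. One way to keep the key out of version control is to read it from an environment variable; the variable name below is illustrative, not something the module prescribes:

```typescript
// nuxt.config.ts -- a sketch, assuming the key lives in an env variable
export default defineNuxtConfig({
  modules: ['nuxt-chatgpt'],
  chatgpt: {
    // NUXT_CHATGPT_API_KEY is an example name, not a module convention.
    apiKey: process.env.NUXT_CHATGPT_API_KEY,
    isEnabled: true, // set to false to disable the module entirely
  },
})
```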

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Oliver Trajceski - LinkedIn - [email protected]

Project Link: https://vuemadness.com/vuehub/nuxt-chatgpt/

Development

# Install dependencies
npm install

# Generate type stubs
npm run dev:prepare

# Develop with the playground
npm run dev

# Build the playground
npm run dev:build

# Run ESLint
npm run lint

# Run Vitest
npm run test
npm run test:watch

# Release new version
npm run release

Acknowledgments

Use this space to list resources you find helpful and would like to give credit to. I've included a few of my favorites to kick things off!

nuxt-chatgpt's People

Contributors

abdelh2o, danielroe, schnapsterdog


nuxt-chatgpt's Issues

Chatcompletion array of messages

Hi,
your project is awesome and it works great with almost no effort :)

Nevertheless, I would suggest changing the interface of the chatCompletion function:

const chatCompletion = async (message: IMessage, model?: IModel, options?: IOptions)

to

const chatCompletion = async (messages: IMessages, model?: IModel, options?: IOptions)

or something similar :)

According to the OpenAI API specification, chat completions can be configured with a set of messages:

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Who won the world series in 2020?" },
    { "role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020." },
    { "role": "user", "content": "Where was it played?" },
  ],
});

This change would make it easy to implement use cases such as the "Socratic Tutor": https://platform.openai.com/examples/default-socratic-tutor.

All the best,
Mauro

Prompt not properly parsed in the endpoint

Expected Behavior

  • A response related to the prompt the user provides.

Actual Behavior

  • A totally random response.

Steps to Reproduce the Problem

Basically use the library by making a simple request according to their example.

Specifications

  • Version: 0.1.8
  • Platform: MacOS
  • Subsystem: Unix

Feedback

This looks amazing! A couple of points to mention, in case you find it helpful:

  1. You do not need to stringify the body of $fetch requests, or set content-type. This will be done automatically.

  2. Because you set the public key of runtimeConfig, the user's API key is exposed to the client.

    โš ๏ธ This should probably be fixed as a matter of urgency.

    https://github.com/SchnapsterDog/nuxt-chatgpt/blob/master/src/module.ts#L50C5-L52. (You just need to update that same line to remove public.)

Exposed API Routes

While routing all of the requests through the nitro server does a great job of keeping our API keys private, it also has a potentially unintended effect for some users.

If your intended ChatGPT user interface component is behind authentication (because who wants to expose unlimited use of your API key to the public?), the routes that this module registers are always public.

I could be missing some intended configuration for this use-case, but as far as I can tell the routes /api/chat and /api/chat-completion are always exposed to the internet. Again, this could be intended functionality, I just wanted to make sure that this was not a bug. For my particular uses, I will have to remove this module for this reason.
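One possible workaround (a sketch, not something the module provides) is a Nitro server middleware that rejects unauthenticated calls to the module's routes. The guard logic could look like the following; the route list matches the issue above, but the credential check is purely illustrative:

```javascript
// Guard logic a server middleware could apply to the module's routes.
// The auth check here is a placeholder for a real session/token lookup.
const PROTECTED_ROUTES = ['/api/chat', '/api/chat-completion']

function shouldReject(path, authHeader) {
  // Only guard the module's endpoints; let everything else through.
  if (!PROTECTED_ROUTES.includes(path)) return false
  // Reject when no credential is presented.
  return !authHeader
}
```

In a real project this would live in server/middleware/ and call your actual auth layer instead of inspecting a raw header.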

Thank you for the great work and contribution nonetheless!

Why the apiKey not stored in runtimeConfig object?

The docs state:

// entirely optional
  chatgpt: {
    apiKey: 'Your apiKey here goes here'
  },

Wouldn't you want to reference the key within the runtimeConfig property like so?

	runtimeConfig: {
		OPENAI_API_KEY: "" // <---- would be copied from an `.env` file
	},
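For context, Nuxt 3's runtimeConfig keeps top-level keys server-only, while anything under public is serialized into the client bundle, so an API key belongs at the top level (a sketch, assuming the key comes from an OPENAI_API_KEY env variable):

```typescript
// nuxt.config.ts -- illustrating private vs. public runtimeConfig
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only: never shipped to the browser.
    openaiApiKey: process.env.OPENAI_API_KEY,
    public: {
      // Anything placed here IS exposed to the client -- no secrets.
    },
  },
})
```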
