Comments (9)

setzer22 commented on May 17, 2024

Please check out #72. I implemented some code to extract embeddings, but we still need to validate if the results are correct, and how to best expose this to our different levels of API.

hlhr202 commented on May 17, 2024

Hi, I'd like to add a llama.cpp PR here for reference. I just noticed they merged the embedding function:
https://github.com/ggerganov/llama.cpp/pull/282/files

setzer22 commented on May 17, 2024

Hi @hlhr202! 👋

Thanks for bringing this to our attention. The code here doesn't look hard at all to port! We will add it to the repo since it makes sense to have a way for people to extract embeddings.

But I'd like to understand (just to satisfy my curiosity): why are LLaMA embeddings useful? Are they the same as regular word embeddings from any other model, that is, capturing the semantics of a word as a vector so you can compute similarity metrics? Do you have a use case for extracting the embeddings that would help us understand the possibilities better? 😄

Not saying this is a requirement for the PR, I just want to learn if there are different use cases for this that I'm not aware of.

hlhr202 commented on May 17, 2024

> Why are the LLaMA embeddings useful? Is this the same thing as regular word embeddings from any other model? That is, capture the semantics of a word as a vector to allow computing similarity metrics? Do you have a use case for extracting the embeddings that would help us understand the possibilities better? 😄

Yes, computing semantic similarity is quite useful in many cases. It lets us search for sentences with similar semantics using a natural-language query.
By the way, I will help verify the PR and quickly merge it into my llama-node.
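
For reference, the metric used to compare embeddings here is plain cosine similarity, i.e. the dot product of the two vectors divided by the product of their magnitudes. A minimal sketch in Rust:

```rust
/// Cosine similarity between two embedding vectors.
/// Returns a value in [-1.0, 1.0]; higher means more semantically similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}
```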

hlhr202 commented on May 17, 2024

> Please check out #72. I implemented some code to extract embeddings, but we still need to validate if the results are correct, and how to best expose this to our different levels of API.

@setzer22
Thanks for your great work!
I just did a simple test computing cosine similarity, comparing llama-rs and OpenAI's embedding functions. I'm not sure how accurate it is...

dog1: My favourite animal is the dog
dog2: I have just adopted a cute dog
cat1: My favourite animal is the cat

llama-rs model: ggml-alpaca-7b-int4

llama-rs cosine similarity:
dog1 vs dog2  ->  0.6884680986404419
dog1 vs cat1  ->  0.9326339960098267

openai model: text-embedding-ada-002

openai cosine similarity:
dog1 vs dog2  ->  0.8523955345153809
dog1 vs cat1  ->  0.9551568031311035

It looks like everything works, but the resulting similarities are quite different from OpenAI's text-embedding-ada-002.
I'll probably run all the tests in llama.cpp as another check.
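
For anyone who wants to reproduce the comparison, this is roughly the shape of the test in Rust. `embed` is a hypothetical placeholder for whichever embedding backend is being called (the #72 extraction, or OpenAI's API), and `cosine_similarity` is the helper sketched earlier in this thread:

```rust
// Hypothetical sketch of the test above; `embed` is a stand-in,
// not a real llm or OpenAI API call.
fn embed(model: &str, text: &str) -> Vec<f32> {
    unimplemented!("call the embedding backend for {model}: {text}")
}

fn main() {
    let model = "ggml-alpaca-7b-int4";
    let dog1 = embed(model, "My favourite animal is the dog");
    let dog2 = embed(model, "I have just adopted a cute dog");
    let cat1 = embed(model, "My favourite animal is the cat");

    println!("dog1 vs dog2 -> {}", cosine_similarity(&dog1, &dog2));
    println!("dog1 vs cat1 -> {}", cosine_similarity(&dog1, &cat1));
}
```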

hlhr202 commented on May 17, 2024

It seems llama.cpp hasn't implemented embeddings yet. I tried to print the embedding vectors, but got size 0.

hlhr202 commented on May 17, 2024

@setzer22 Sorry, I reopened this ticket because I've noticed some changes in llama.cpp. I have also tested a few examples on 7B Alpaca, but the results are not very accurate (I'm not sure if that's caused by the small model size). What I noticed in llama.cpp is that they don't use any end token as the representation of the sentence embedding: they put all the prompt tokens into the eval function, yet always get a fixed-length vector back.
[screenshot: llama.cpp embedding code]
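
To make that concrete: if eval produces one n_embd-sized row per prompt token, pooling over the token axis still gives a fixed-length sentence embedding no matter how long the prompt is. A rough sketch of mean pooling (illustrative only, not the actual llama.cpp code), assuming the per-token embeddings come back as a flat n_tokens * n_embd buffer:

```rust
/// Mean-pool per-token embeddings (a flat buffer of n_tokens * n_embd floats)
/// into a single n_embd-sized sentence embedding.
fn mean_pool(per_token: &[f32], n_embd: usize) -> Vec<f32> {
    assert!(per_token.len() % n_embd == 0, "buffer must hold whole rows");
    let n_tokens = (per_token.len() / n_embd) as f32;
    let mut sentence = vec![0.0f32; n_embd];
    for row in per_token.chunks_exact(n_embd) {
        for (acc, v) in sentence.iter_mut().zip(row) {
            *acc += v; // sum each dimension over all tokens
        }
    }
    for acc in &mut sentence {
        *acc /= n_tokens; // divide by token count to get the mean
    }
    sentence
}
```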

hlhr202 commented on May 17, 2024

@setzer22 I think our llama-rs implementation of embeddings may not be correct. As noted above, llama.cpp doesn't use any end token as the representation of the sentence embedding: they put all the prompt tokens into the eval function, yet always get a fixed-length vector back.

[screenshot: llama.cpp embedding code]

Another trick I found, though I'm not sure their implementation makes sense... I guess they just remove the additional vector items, and I don't even know if they drop the right part; quite weird. I will keep following this issue over the next few weeks, and I'm going to test the 30B model to see whether its semantic accuracy is better than 7B Alpaca's.
[screenshot: llama.cpp code dropping the extra embedding values]
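
If the dropped items are the earlier tokens' rows, the trick would amount to keeping only the last token's slice of the flat buffer. A guess at what that looks like (not a port of the llama.cpp code):

```rust
/// Keep only the final token's n_embd floats from the flat per-token buffer,
/// discarding every earlier token's row.
fn last_token_embedding(per_token: &[f32], n_embd: usize) -> Vec<f32> {
    assert!(per_token.len() >= n_embd, "buffer must hold at least one row");
    per_token[per_token.len() - n_embd..].to_vec()
}
```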

philpax commented on May 17, 2024

This should now be sorted / understandable with #273. Let me know if there's anything else.
