
Comments (14)

chuanqisun commented on May 22, 2024

I wonder if this feature could benefit from Intl.Segmenter (requires a polyfill for Firefox). The segmenter can take a locale and automatically determine where the word boundaries should be, potentially reducing library size and improving tokenization performance as well. It works on the server side too.
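
For reference, a minimal sketch of what Intl.Segmenter produces at word granularity (the locale tag and sample sentence are just illustrative; the exact segments depend on the runtime's ICU data):

// Ask for word boundaries in Chinese text with the built-in Intl.Segmenter.
const segmenter = new Intl.Segmenter("zh", { granularity: "word" });

// segment() returns an iterable of { segment, index, isWordLike } records.
const segments = Array.from(segmenter.segment("我爱她，当然"));

// Keep only word-like segments, dropping punctuation and whitespace.
const words = segments.filter((s) => s.isWordLike).map((s) => s.segment);
console.log(words); // something like ["我", "爱", "她", "当然"], depending on the runtime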


micheleriva commented on May 22, 2024

@SoTosorrow we could add rules for languages such as Chinese, where we would operate on tokens differently. But we need examples and documentation to understand how to handle this properly; we might need your help here 🙂


micheleriva commented on May 22, 2024

Hi @SoTosorrow,
absolutely, any PR is very much appreciated 🙂


SoTosorrow commented on May 22, 2024

Hi @SoTosorrow, absolutely, any PR is very much appreciated 🙂

Thanks for your reply.
I just realized that Lyra's index starts from the beginning of each split word. For example, Lyra can match "i love her" when searching for "lov", but not when searching for "ove" (with exact). This means that for languages written as consecutive characters (with no or few separators), such as Chinese and Japanese, similar rules cannot simply be applied.
A Chinese sentence always looks like "ABC,EF" ("iloveher,ofcourse"), so I cannot find it by searching for "B" ("love") or "C" ("her"); I can only find it by searching for "A.." ("ilove").
It seems I can't put together my PR that easily.
hhhhhhh
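
To restate the example in code (a sketch against the current Orama API; the expected outcomes are the ones described above):

import { create, insert, search } from "@orama/orama";

const db = await create({ schema: { text: "string" } as const });
await insert(db, { text: "i love her" });

// Matches: the index is built from whole tokens, and "lov" is a prefix of the token "love".
await search(db, { term: "lov" });

// No hit with exact matching: "ove" does not start at a token boundary.
await search(db, { term: "ove", exact: true });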


SoTosorrow commented on May 22, 2024

Hi @SoTosorrow, absolutely, any PR is very much appreciated 🙂

It's not easy to support Chinese (or any other language written as consecutive characters with no separators) by appending a simple regular expression in pure Lyra. If I want to retrieve Chinese text, I need to break it into words before "insert" and "search".
Should I add the regular expression and tell users that Chinese sentences need to be segmented first, or give up on this approach?
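
A minimal sketch of that pre-processing idea, assuming Intl.Segmenter and the current Orama API: segment the text, re-join it with spaces so the default tokenizer can split it, and run the query through the same step.

import { create, insert, search } from "@orama/orama";

// Re-join Chinese text with spaces so the default tokenizer can split it again.
function presegment(text: string): string {
  const segmenter = new Intl.Segmenter("zh", { granularity: "word" });
  return Array.from(segmenter.segment(text))
    .filter((s) => s.isWordLike)
    .map((s) => s.segment)
    .join(" ");
}

const db = await create({ schema: { text: "string" } as const });

// Segment before insert...
await insert(db, { text: presegment("我爱她，当然") });

// ...and segment the query the same way before search.
const results = await search(db, { term: presegment("爱她") });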


SoTosorrow commented on May 22, 2024

@SoTosorrow we could add rules for languages such as Chinese, where we would operate on tokens differently. But we need examples and documentation to understand how to handle this properly; we might need your help here 🙂

I'd love to help with examples and documentation; I will share the relevant information once I've sorted it out.
Should I open a discussion for the examples and documentation, or continue in this issue?


micheleriva commented on May 22, 2024

Let's open a discussion for that; it will act as future documentation.


SoTosorrow commented on May 22, 2024

Let's open a discussion for that; it will act as future documentation.

Copy that! Thanks.


SoTosorrow commented on May 22, 2024

I wonder if this feature could benefit from Intl.Segmenter (requires a polyfill for Firefox). The segmenter can take a locale and automatically determine where the word boundaries should be, potentially reducing library size and improving tokenization performance as well. It works on the server side too.

It seems to work; I will run more tests. Thanks for your guidance!


OultimoCoder commented on May 22, 2024

@SoTosorrow Did you manage to get Chinese working? If so, could you provide an example?


group900-3 commented on May 22, 2024

Based on the help in the comments above, I implemented a Chinese tokenizer using Intl.Segmenter, which may be able to help you.
Intl.Segmenter works great in Chrome and Cloudflare Workers.

import { create, type Orama } from "@orama/orama";

// Example schema; replace it with your own fields.
const schema = { title: "string" } as const;

// Override the default English tokenizer with an Intl.Segmenter-based one.
const chineseTokenizer = {
  language: "english", // a supported language id; the custom tokenize() below does the real work
  normalizationCache: new Map(),
  tokenize: (raw: string) => {
    // Word-granularity segmentation for Chinese.
    const segmenter = new Intl.Segmenter("zh", { granularity: "word" });
    // The Segments object is iterable, so Array.from can consume it directly.
    return Array.from(segmenter.segment(raw)).map((i) => i.segment);
  },
};

const db: Orama<typeof schema> = await create({
  schema,
  components: {
    tokenizer: chineseTokenizer,
  },
});
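
For completeness, a hypothetical usage sketch on top of that setup (the sentence is illustrative; note the update below about result quality):

import { insert, search } from "@orama/orama";

// The document text goes through chineseTokenizer at insert time...
await insert(db, { title: "我爱她，当然" });

// ...and the query is segmented by the same tokenizer at search time.
const results = await search(db, { term: "当然" });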

Update:
Although no errors were reported when doing this, most of the time I couldn't find the results I wanted, so I think further adaptation is needed somewhere, but that goes beyond what I can do. For now, I will choose another engine to plug into my project.


SoTosorrow commented on May 22, 2024

Based on the help in the comments above, I implemented a Chinese tokenizer using Intl.Segmenter, which may be able to help you.
Intl.Segmenter works great in Chrome and Cloudflare Workers.


Update:
Although no errors were reported when doing this, most of the time I couldn't find the results I wanted, so I think further adaptation is needed somewhere, but that goes beyond what I can do. For now, I will choose another engine to plug into my project.

I also tried Intl.Segmenter-based segmentation following the comments above, but the results for Chinese are not always good, and there may be some dependency issues.
I also tried other word-segmentation libraries such as "jieba"; some of them give good results, but they introduce additional third-party packages and (at the time) required modifying the core tokenization function to adapt it to Chinese word segmentation.
Considering the possible impact, I stopped there.
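
For anyone exploring that route, a hypothetical sketch of wiring a jieba-based tokenizer into Orama, assuming the nodejieba package (one Node.js binding for jieba) and the same custom-tokenizer shape used earlier in this thread; whether the results are good enough is exactly the open question here:

import { create } from "@orama/orama";
import nodejieba from "nodejieba";

// Same shape as the Intl.Segmenter tokenizer above, but backed by jieba.
const jiebaTokenizer = {
  language: "english", // placeholder language id; tokenize() does the segmentation
  normalizationCache: new Map(),
  tokenize: (raw: string) => nodejieba.cut(raw),
};

const db = await create({
  schema: { title: "string" } as const,
  components: {
    tokenizer: jiebaTokenizer,
  },
});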


group900-3 commented on May 22, 2024

@SoTosorrow What search engine did you choose in the end? I'm going to try Algolia.


SoTosorrow commented on May 22, 2024

@SoTosorrow What search engine did you choose in the end? I'm going to try Algolia.

I didn't end up using a JS search service, so I'm afraid I can't give you more suggestions.

