
acupofjose / elasticstore


A pluggable union between Firebase CloudFirestore + ElasticSearch

License: MIT License

TypeScript 98.54% Dockerfile 0.46% JavaScript 1.00%
elasticsearch elasticsearch-cloudfirestore firestore typescript


elasticstore's People

Contributors

acupajoe, acupofjose, allcontributors[bot], dependabot[bot], fossabot, gubarez, nareddyt


elasticstore's Issues

TypeError: Cannot read property 'request' of undefined at FirestoreCollectionHandler.existsApi

This is coming from the Elasticsearch package; I'm not sure why it isn't working.

TypeError: Cannot read property 'request' of undefined
    at FirestoreCollectionHandler.existsApi (/app/node_modules/@elastic/elasticsearch/api/api/exists.js:62:25)
    at Queuer.<anonymous> (/app/lib/util/Queuer.js:91:59)
    at step (/app/lib/util/Queuer.js:33:23)
    at Object.next (/app/lib/util/Queuer.js:14:53)
    at /app/lib/util/Queuer.js:8:71
    at new Promise (<anonymous>)
    at __awaiter (/app/lib/util/Queuer.js:4:12)
    at Timeout._onTimeout (/app/lib/util/Queuer.js:85:49)
    at listOnTimeout (internal/timers.js:554:17)
    at processTimers (internal/timers.js:497:7)

Could this potentially be my Elasticsearch version?

Adding some examples for using subcollections?

This library looks perfect for one of my projects, but right now the interaction between subcollections and the rest of the configuration isn't described very thoroughly.
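For reference, here is the shape a subcollection entry appears to take, adapted from another snippet on this page rather than from official documentation, so treat it as a sketch (the eventId field, and the assumption that parent exposes an id, are illustrative only):

import { Reference } from "./types"

// A parent collection plus one of its subcollections, indexed separately.
const references: Array<Reference> = [
  {
    collection: "events",
    index: "events",
  },
  {
    // Indexes every document of the `location` subcollection under each `events` document.
    collection: "events",
    subcollection: "location",
    index: "event-location",
    // Hypothetical: keep a pointer back to the parent document, assuming `parent` exposes an id.
    transform: (data, parent) => ({
      ...data,
      eventId: parent ? parent.id : null,
    }),
  },
]

export default references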

The datastore operation timed out, or the data was temporarily unavailable

I'm seeing this error and I'm not sure why, but the process should retry at least a few times instead of exiting.

Error: Error 4: The datastore operation timed out, or the data was temporarily unavailable.
    at QueryWatch.onData (elasticstore\node_modules\@google-cloud\firestore\build\src\watch.js:350:34)
    at PassThrough.<anonymous> (elasticstore\node_modules\@google-cloud\firestore\build\src\watch.js:297:26)
    at PassThrough.emit (events.js:315:20)
    at addChunk (_stream_readable.js:302:12)
    at readableAddChunk (_stream_readable.js:278:9)
    at PassThrough.Readable.push (_stream_readable.js:217:10)
    at PassThrough.Transform.push (_stream_transform.js:152:32)
    at PassThrough.afterTransform (_stream_transform.js:96:10)
    at PassThrough._transform (_stream_passthrough.js:46:3)
    at PassThrough.Transform._read (_stream_transform.js:191:10)

I'm not too familiar with Firestore, but it seems to me that you are downloading the collection instead of streaming it, and if it's too big, it times out?

Is this the solution? https://cloud.google.com/nodejs/docs/reference/firestore/1.3.x/Query#stream

Here is the issue that they claim fixed this:
googleapis/nodejs-firestore#1040
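For reference, a minimal sketch of what switching to the streaming API could look like, assuming a firebase-admin Firestore instance; this is not how elasticstore currently reads the collection:

import * as admin from "firebase-admin"

const firestore = admin.firestore()

// Stream documents one at a time instead of materializing the whole
// collection in memory, which may avoid the datastore timeout on large collections.
async function streamCollection(collection: string) {
  const stream = firestore.collection(collection).stream()
  for await (const doc of stream as AsyncIterable<admin.firestore.QueryDocumentSnapshot>) {
    // hand doc.data() off to the indexer here
    console.log(doc.id)
  }
}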

Clusters

Hi,

I've got a problem with my Elasticsearch instance: I have lost all my data.
I'm trying to find out where this error is coming from.
After about two weeks all my data disappears, and this is the third time it has happened.

Simple question: does elasticstore work with an Elasticsearch cluster?

Logs before crash
2020-08-12T14:24:13.131649420Z {"type": "server", "timestamp": "2020-08-12T14:24:13,130Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[ql2smab371-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:14.305856350Z {"type": "server", "timestamp": "2020-08-12T14:24:14,305Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[orderproducts/W5SbtYucQnq2sgllRmq8HA] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:14.902232512Z {"type": "server", "timestamp": "2020-08-12T14:24:14,901Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.apm-custom-link/aArCxTucSNGGLanlk5rjxQ] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:15.443445037Z {"type": "server", "timestamp": "2020-08-12T14:24:15,442Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.kibana_task_manager_1/GwPn0tIcQeaAAu6WwOzKag] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:15.986234335Z {"type": "server", "timestamp": "2020-08-12T14:24:15,985Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.apm-agent-configuration/Hp66dGF5QEeJ409ys06OgA] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:16.533518541Z {"type": "server", "timestamp": "2020-08-12T14:24:16,532Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[orders/a5AY4O1-Rj-uHMxEW875_Q] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:17.099574191Z {"type": "server", "timestamp": "2020-08-12T14:24:17,099Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.kibana-event-log-7.8.1-000001/l4bxH4nfRmKUVVDvfkUVKQ] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:17.619359461Z {"type": "server", "timestamp": "2020-08-12T14:24:17,618Z", "level": "INFO", "component": "o.e.c.m.MetadataDeleteIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.kibana_1/J5lADbITQhWwwbImFfK8ZA] deleting index", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:18.279682540Z {"type": "server", "timestamp": "2020-08-12T14:24:18,278Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[bbb4sqzfa4-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:19.005468812Z {"type": "server", "timestamp": "2020-08-12T14:24:19,004Z", "level": "INFO", "component": 
"o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[1ignqv614y-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:19.716208495Z {"type": "server", "timestamp": "2020-08-12T14:24:19,715Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[ozdki96t6e-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:20.443041403Z {"type": "server", "timestamp": "2020-08-12T14:24:20,442Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[fvaioboe1r-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:21.164803274Z {"type": "server", "timestamp": "2020-08-12T14:24:21,164Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[845z15nmpo-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:21.563479494Z {"type": "server", "timestamp": "2020-08-12T14:24:21,562Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:21.717842012Z {"type": "server", "timestamp": "2020-08-12T14:24:21,717Z", "level": "INFO", "component": "o.e.c.m.MetadataMappingService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[.kibana/JdySITThQj-kZSmLiVEDog] create_mapping [_doc]", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:21.897540497Z {"type": "server", "timestamp": "2020-08-12T14:24:21,897Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[b522c2vx4b-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:22.620409416Z {"type": "server", "timestamp": "2020-08-12T14:24:22,617Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[pv9hap6cpy-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:23.581306926Z {"type": "server", "timestamp": "2020-08-12T14:24:23,580Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[6sr72zmoof-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:24.307193230Z {"type": "server", "timestamp": 
"2020-08-12T14:24:24,306Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[7o7ebawnh3-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" } 2020-08-12T14:24:25.044807574Z {"type": "server", "timestamp": "2020-08-12T14:24:25,044Z", "level": "INFO", "component": "o.e.c.m.MetadataCreateIndexService", "cluster.name": "docker-cluster", "node.name": "a49cf5331bce", "message": "[5z44f9pe0t-meow] creating index, cause [api], templates [], shards [1]/[1], mappings []", "cluster.uuid": "UmI91QltTRWVFFDl14dkPQ", "node.id": "sv1r5QezSdCqSnzo5zPSqA" }

Cheers

Alain

Adding retry_on_conflict parameter?

Hey, thanks for the cool project, got me up and running very quickly.

I'm using the project to enable search over some fields in our document collection, which is constantly being updated with new documents. Occasionally it's possible we modify some of the documents, but that is rare.

I'm running into this issue occasionally:

Error in FS_ADDED handler [doc@1098949404239818800]: [version_conflict_engine_exception] [tweets][1098949404239818800]: version conflict, current version [2] is different than the one provided [1], with { index_uuid="qj6ftGLfSYqNEut4ShXbeg" & shard="2" & index="tweets" }

I haven't been able to confirm it, but some testers have reported that not all search results that should show up actually do. I believe this is the cause, since those documents presumably failed to be added.

From my research, it looks like adding a retry_on_conflict value of one or two should handle these conflicts. It looks like it would go in handleAdded in FirestoreHandler.ts.

Does that look right to you?
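For illustration, a sketch of what passing retry_on_conflict to the @elastic/elasticsearch client could look like; the surrounding handler code and exact placement in FirestoreHandler.ts are assumptions, not copied from the repo:

import { Client } from "@elastic/elasticsearch"

const client = new Client({ node: "http://localhost:9200" })

async function upsertWithRetry(index: string, id: string, data: Record<string, unknown>) {
  // retry_on_conflict asks Elasticsearch to retry the update internally
  // when it hits a version_conflict_engine_exception.
  await client.update({
    index,
    id,
    retry_on_conflict: 2,
    body: {
      doc: data,
      doc_as_upsert: true,
    },
  })
}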

Include firestore map fields

I'm trying to use the example on my Firestore DB, but I'm stuck on how to use transform or include. I have a map field on each document from which I only want to include 2 fields; it has a lot of other information that I don't want to send to Elasticsearch. So if I take your example:

// firestore (in the console)
groups: {
  12341235: {
    title: "Group Name",                  // string
    description: "I'm a group",           // string
    location: "32,-74",                   // geo_point
    createdAt: "9/4/2018 00:00:00 GMT-0", // date
    groupInfo: {
      contact: 'joe',
      owner: 'sandy',
      // ...a lot more group info that we don't want to include
    }
  }
}

How would I only send groupInfo.contact and groupInfo.owner? I tried using either include or transform like in your example, but neither seems to work; I get a type error: [ts] Cannot find name 'doc'. [2304]

// references (in ./src/references.ts)
{
  ....
  {
    collection: "groups",
    index: "groups",
    type: "groups",
    mappings: {
      location: {
        contact: "string",
        owner: "string"
      }
    },
    transform: (data, parent) => ({
      ...data,
      contact: `${doc.groupInfo.contact}`,
      owner: `${doc.groupInfo.owner}`
    }),
    include: ['groupInfo.contact', 'groupInfo.owner'],
  },
  ....
}
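For comparison, a sketch of a transform that reads from the data argument instead of an undefined doc variable; whether this produces exactly the index shape elasticstore expects is an assumption, not verified against the library:

import { Reference } from "./types"

// Hypothetical entry: keep only the two groupInfo fields and drop the rest of the map.
const groups: Reference = {
  collection: "groups",
  index: "groups",
  transform: (data, parent) => ({
    title: data.title,
    description: data.description,
    location: data.location,
    createdAt: data.createdAt,
    contact: data.groupInfo?.contact,
    owner: data.groupInfo?.owner,
  }),
}

export default [groups]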

this.client.indices.exists returns a response object not a boolean

 const exists = await this.client.indices.exists({ index })
 console.log(`${index} ${JSON.stringify(exists, null, 4)}?`)
comments {
    "body": false,
    "statusCode": 404,
    "headers": {
        "content-type": "application/json; charset=UTF-8",
        "content-length": "377"
    },
    "warnings": null,
    "meta": {
        "context": null,
        "request": {
            "params": {
                "method": "HEAD",
                "path": "/comments",
                "body": null,
                "querystring": "",
                "headers": {
                    "user-agent": "elasticsearch-js/7.8.0 (win32 10.0.19041-x64; Node.js v14.4.0)"
                },
                "timeout": 60000
            },
            "options": {
                "warnings": null
            },
            "id": 2
        },
        "name": "elasticsearch-js",
        "connection": {
            "url": "http://localhost:9200/",
            "id": "http://localhost:9200/",
            "headers": {},
            "deadCount": 0,
            "resurrectTimeout": 0,
            "_openRequests": 0,
            "status": "alive",
            "roles": {
                "master": true,
                "data": true,
                "ingest": true,
                "ml": false
            }
        },
        "attempts": 0,
        "aborted": false
    }
}?
...

You check whether it exists, so it seems like the code needs to read exists.body instead, or:

const { body } = await this.client.indices.exists({ index })
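A minimal sketch of the suggested fix, assuming a 7.x @elastic/elasticsearch client (not copied from the repo):

import { Client } from "@elastic/elasticsearch"

const client = new Client({ node: "http://localhost:9200" })

// With the 7.x client, indices.exists resolves to a response envelope,
// so the boolean lives on `.body` rather than being the resolved value itself.
async function ensureIndex(index: string) {
  const { body: indexExists } = await client.indices.exists({ index })
  if (!indexExists) {
    await client.indices.create({ index })
  }
}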

Poor error handling when pushing data to same index, will throw already exists error.

When pushing data to the same index (two references pushing to the same ES index), each worker checks whether the index exists and tries to create it, because they are started at the same time. The second worker then starts throwing already_exists errors.

I had to add try/catch error handling and a delay(5000) in the Worker class to give each worker some breathing room to initialize.
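For illustration, one way to tolerate the race without a fixed delay; the error shape is an assumption about the 7.x @elastic/elasticsearch client, and this is not the repo's Worker code:

import { Client } from "@elastic/elasticsearch"

const client = new Client({ node: "http://localhost:9200" })

// If another worker created the index between our exists-check and the
// create call, swallow the "already exists" error instead of crashing.
async function createIndexIfMissing(index: string) {
  try {
    await client.indices.create({ index })
  } catch (err: any) {
    const type = err?.meta?.body?.error?.type
    if (type !== "resource_already_exists_exception") {
      throw err
    }
  }
}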

Add include type name parameter

Hi, due to the Elasticsearch update, include_type_name is set to false by default. For anyone with this problem, it is easily solved by adding this line in src/util/FirestoreHandler.ts:
[screenshot of the suggested change]
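The screenshot is not available here; as a guess at what the change might look like, assuming the mapping is pushed through putMapping as elsewhere in FirestoreHandler.ts (the exact line from the screenshot is unknown):

// Hypothetical reconstruction: pass include_type_name so a 7.x cluster
// accepts a typed mapping body.
await this.client.indices.putMapping({
  index,
  type: this.reference.type,
  include_type_name: true,
  body: {
    properties: this.reference.mappings,
  },
})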

[feature request] Add an option to control the dynamic mapping setting

If you have inconsistent docs, you'll have a lot of problems ingesting your stuff with dynamic mapping enabled.

https://www.elastic.co/guide/en/elasticsearch/reference/7.8/dynamic.html

There are multiple options. One is to set dynamic: false at the top level, which is kind of a pain to get to; I recommend adding a config variable that can be set from references.ts.

In FirestoreHandler.ts:

await this.client.indices.putMapping({
  index,
  body: {
    dynamic: false, // making dynamic false
    properties: this.reference.mappings,
  },
})
The second option would be to do it at the field level; for that, could you extend types.ts to allow it?

{
  "mappings": {
    "dynamic": false, 
    "properties": {
      "user": { 
        "properties": {
          "name": {
            "type": "text"
          },
          "social_networks": { 
            "dynamic": true,
            "properties": {}
          }
        }
      }
    }
  }
}
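Purely as a sketch of what the per-reference option could look like, with a hypothetical dynamic field added to the Reference type (this is proposed API surface, not something the library currently exposes):

// Hypothetical addition in src/types.ts
export interface Reference {
  collection: string
  index: string
  mappings?: Record<string, unknown>
  // Proposed: control Elasticsearch dynamic mapping per reference.
  dynamic?: boolean | "strict"
  // ...existing fields elided
}

// Hypothetical use in FirestoreHandler.ts
await this.client.indices.putMapping({
  index,
  body: {
    dynamic: this.reference.dynamic ?? true,
    properties: this.reference.mappings,
  },
})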

Update does not work

Hi,

I have added 2 fields to the mapping file and the database mapping.
When I start elasticstore, it does not include the 2 brand-new fields, even though I have added them in Elasticsearch manually.

[screenshots of the mappings and the resulting index]

Alain

index can't be a function

According to the documentation, the index property in the references can be a string or a function. However, in src/util/FirestoreHandler.ts the bind method called in the constructor expects a string. At this stage, if the index is a function it cannot be executed, since it expects to receive a snapshot. Perhaps the ES index-management logic should be moved into a separate method and executed in the handleSnapshot method.
I've added a pull request fixing this issue: #38

Mapping Date and excluding fields inside objects.

How do I map date?

I have this in references.ts

collection: "comments",
    index: "comments",
    mappings: {
      created_at: {
        type: "date" 
      }
    },

But it creates this mapping in ES:

"updated_at":{"properties":{"_nanoseconds":{"type":"long"},"_seconds":{"type":"long"}}}

Which looks like this in the index:

updated_at: {
  _seconds: 1580327708,
  _nanoseconds: 371000000,
},

The data is of type timestamp in Firestore.
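One possible workaround (an assumption about how to handle it, not taken from the README) is to convert the Firestore Timestamp in a transform before it reaches Elasticsearch, so the field arrives as something the date type can parse:

import { Reference } from "./types"
import { firestore } from "firebase-admin"

const comments: Reference = {
  collection: "comments",
  index: "comments",
  mappings: {
    created_at: { type: "date" },
  },
  // Convert the Firestore Timestamp ({ _seconds, _nanoseconds }) into an
  // ISO-8601 string so Elasticsearch's date type can parse it.
  transform: (data, parent) => ({
    ...data,
    created_at:
      data.created_at instanceof firestore.Timestamp
        ? data.created_at.toDate().toISOString()
        : data.created_at,
  }),
}

export default [comments]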

2nd question:

If I have a field inside another object, e.g. comment.something.foo, and I want to exclude foo, how would I do that? Is that something I use transform for rather than the exclude array? Can I give the full path as a string in the array?

How to get the search text in builder?

Hi, I'm new to all this, so please help me out.

What I want to do is match a field of a document in the search.

{
  collection: "organizations",
  subcollection: "text",
  index: "text",
  mappings: {
    textTitle: { type: "text" }
  },
  builder: (ref) => ref.where('orgId', '==', <INPUT TEXT>),
  // Transform data as we receive it from firestore
  transform: (data, parent) => ({ ...data, text: '${doc}' })
}

I want to know how to configure the code so that I can match the orgId with one of the input variables.

Relationship data and foreign keys?

Is there any way I could include data from other tables when I only have the ID?

Example of what I'm expecting to achieve:

import { Reference } from "./types"

// Records should be added here to be indexed / made searchable
const references: Array<Reference> = [
 {
   collection: "comments",
   index: "comments",
   include: ["text", "user_id"],
 },
]
export default references

Instead of indexing user_id, I would like it to fetch the actual user document from the users collection based on the ID.

Is this possible?
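One idea, sketched under the assumption that the transform hook may return a Promise (which is not confirmed by the library's documentation), is to denormalize the referenced user inside the transform:

import { Reference } from "./types"
import * as admin from "firebase-admin"

const db = admin.firestore()

// Hypothetical: fetch the referenced user document and embed it in the
// indexed comment instead of the raw user_id.
const comments: Reference = {
  collection: "comments",
  index: "comments",
  transform: async (data, parent) => {
    const userSnap = await db.collection("users").doc(data.user_id).get()
    return {
      text: data.text,
      user: userSnap.exists ? userSnap.data() : null,
    }
  },
}

export default [comments]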

How can I connect from python?

Hey,
I have an API in Python and need to connect to the search collection from there.
Once I run your repo in Docker, will I be able to connect to it? Should I start an Express server?
Can I connect directly to Elasticsearch, and will I be able to see the search collection?
Thanks

TypeError: Cannot read property 'onSnapshot' of undefined

Hi,

I followed your tutorial on Medium (https://medium.com/@acupofjose/wondering-how-to-get-elasticsearch-and-firebases-firestore-to-play-nice-1d84553aa280) and I'm stuck at the last step.

I have an events collection in Firestore and a location subcollection with a geopoint-type field called geopoints, like this:

[screenshot of the Firestore structure]

I added these lines to the references.ts file, and the npm start command runs without errors:

import { Reference } from "./types"

// Records should be added here to be indexed / made searchable
const references: Array<Reference> = [
  {
    collection: "events",
    subcollection: "location",
    index: "event-location",
    mappings: {
        geopoints: {
          // Elasticsearch's 'geo_point' needs to be specified
          type: "geo_point"
      }
    },
    // Transform data as we receive it from firestore
    transform: (data, parent) => ({
      ...data,
     location:`${data.location.geopoints_latitude},${data.location.geopoints._longitude}`
    })
  },
]

export default references

And this is my function in my React project:

import firebase from '@config/firebase';

export const getEvents = () => {
  const result = firebase.firestore().collection('search').add({
    request: {
      index: 'events',
      type: 'events',
      body: {},
    },
    response: null
  })

  result.ref.onSnapshot(doc => {
    if (doc.response !== null) {
      console.log(doc.response)
    }
  })
};

I get this error after calling getEvents function:

TypeError: Cannot read property 'onSnapshot' of undefined

My goal is to reduce the number of requests to Firestore: first retrieve all my events stored in Firestore, then retrieve only the events whose latitude and longitude fall within an X-kilometre radius of the user's latitude and longitude. So for now I just want to test that it works, and I don't need to build a query in body.

Do you know where this error could come from? Also, I don't understand what the type key is for; could you explain it, please?

Thank you in advance :)
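For context, a likely cause (an observation about the Firestore web SDK, not a confirmed answer): collection('search').add(...) returns a Promise that resolves to a DocumentReference, so result.ref is undefined. Awaiting the promise and subscribing on the returned reference would look roughly like this:

import firebase from '@config/firebase';

export const getEvents = async () => {
  // add() resolves to a DocumentReference; there is no `.ref` on the Promise itself.
  const ref = await firebase.firestore().collection('search').add({
    request: {
      index: 'events',
      type: 'events',
      body: {},
    },
    response: null,
  });

  ref.onSnapshot((doc) => {
    const data = doc.data();
    if (data && data.response !== null) {
      console.log(data.response);
    }
  });
};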

Add failed then all update failed as well

Hi, I have an issue with your library.

I don't know why, but sometimes the create fails, and then every subsequent update fails as well.

Error in FS_ADDED handler [doc@VDPUYRBpKg3sJO4nwMgd]: [mapper_parsing_exception] failed to parse
Error in FS_MODIFIED handler [doc@VDPUYRBpKg3sJO4nwMgd]: [document_missing_exception] [orders][VDPUYRBpKg3sJO4nwMgd]: document missing, with { index_uuid="3wbl0DqXSqGYyHsx5rOcbQ" & shard="0" & index="orders" }
Error in FS_MODIFIED handler [doc@VDPUYRBpKg3sJO4nwMgd]: [document_missing_exception] [orders][VDPUYRBpKg3sJO4nwMgd]: document missing, with { index_uuid="3wbl0DqXSqGYyHsx5rOcbQ" & shard="0" & index="orders" }
Error in FS_MODIFIED handler [doc@VDPUYRBpKg3sJO4nwMgd]: [document_missing_exception] [orders][VDPUYRBpKg3sJO4nwMgd]: document missing, with { index_uuid="3wbl0DqXSqGYyHsx5rOcbQ" & shard="0" & index="orders" }

Provide the "why"

Thanks for this project; there's some really interesting stuff in here, and it's definitely a solution I wouldn't have come to on my own. I've read your Medium post and I'm still trying to wrap my head around the "why".

If I'm understanding the project correctly, results are generated on the fly and on an as-needed basis. They're generated by Elasticsearch, but I can't quite figure out what advantage routing the data through Firestore has over querying ES directly. I imagine you could leverage Firebase auth/document-access mechanisms, but I'm wondering if there's any other advantage to this setup that I'm just not grokking.

Error in `FS_ADDED` handler [doc@h3XhjEJQJ4EZ6KXLSgIE]: illegal_argument_exception

Hey, I found this repository while looking for a simple integration between Elasticsearch and Firestore. I managed to connect to Firestore and run Elasticsearch locally, but I'm getting the error above, an illegal_argument_exception. That's all the error message tells me, and I have no idea how to debug it.

For more context, I have a collection of jobs documents hosted on Firebase, and I want to sync it with Elasticsearch with the goal of doing smarter queries. The code below is all I put inside the references.ts file.

const references: Array<Reference> = [
  {
    collection: "jobs",
    index: "jobs",
  },
]

Please let me know how I can fix this. Thanks!

What is onItemUpserted?

I'm trying to figure out what onItemUpserted is. When is this used?

I'd like to try to recreate the split feature of Logstash, where you take a document and create multiple documents from it.

Converting Elastic response.hits.hits to a map changes the sort order

By default, Elasticsearch results are sorted by _score, and they can be sorted any number of ways depending on the query. In SearchHandler.ts:70, the search results are coerced into a map keyed by _id. This ends up ordering the results by _id rather than by their original array position. If you are using Elasticsearch as a search engine, this will produce incorrectly ordered results, or you'll have to re-sort the map by _score.
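To make the ordering point concrete (the hit shape below is the standard Elasticsearch response body; the keyed-object form mirrors what the issue describes, not the repo's exact code):

interface Hit {
  _id: string
  _score: number
  _source: Record<string, unknown>
}

// Hits arrive from Elasticsearch already ordered by _score (or by the query's sort clause).
const hits: Hit[] = [
  { _id: "doc-b", _score: 2.3, _source: { title: "best match" } },
  { _id: "doc-a", _score: 1.1, _source: { title: "weaker match" } },
]

// Coercing them into an object keyed by _id (what the issue describes) means consumers
// reading the map back, e.g. from a Firestore map field where keys come back sorted,
// no longer see relevance order.
const byId = Object.fromEntries(hits.map((h) => [h._id, h._source]))

// Keeping the array, or carrying _score alongside each hit, preserves the order.
const ordered = hits.map((h) => ({ id: h._id, score: h._score, ...h._source }))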
