
elasticsearch-docker's Introduction

elasticsearch-docker

Dockerfile source for the Elasticsearch Docker image.

Upstream

This source repo was originally copied from: https://github.com/docker-library/elasticsearch

Disclaimer

This is not an official Google product.

About

This image contains an installation of Elasticsearch.

For more information, see the Official Image Marketplace Page.

Pull command (first install gcloud):

gcloud auth configure-docker && docker pull marketplace.gcr.io/google/elasticsearch7

The Dockerfile for this image can be found here.

Table of Contents

Using Kubernetes

Consult Marketplace container documentation for additional information about setting up your Kubernetes environment.

Run Elasticsearch

Start an Elasticsearch instance

Copy the following content to a pod.yaml file, then run kubectl create -f pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: some-elasticsearch
  labels:
    name: some-elasticsearch
spec:
  containers:
    - image: marketplace.gcr.io/google/elasticsearch7
      name: elasticsearch
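
Before exposing the pod, you can confirm it is up (a quick sanity check; startup may take a minute):

```shell
# Wait for the pod to reach the Running state
kubectl get pod some-elasticsearch
# Check the Elasticsearch logs for the "started" message
kubectl logs some-elasticsearch | grep -i started
```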

Run the following to expose the port. Depending on your cluster setup, this might expose your service to the Internet with an external IP address. For more information, consult Kubernetes documentation.

kubectl expose pod some-elasticsearch --name some-elasticsearch-9200 \
  --type LoadBalancer --port 9200 --protocol TCP
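
Once the service has an external IP, you can reach Elasticsearch from outside the cluster (EXTERNAL_IP below is a placeholder for the address kubectl reports):

```shell
# The EXTERNAL-IP column may show <pending> for a minute after creation
kubectl get service some-elasticsearch-9200
# Replace EXTERNAL_IP with the address from the column above
curl "http://EXTERNAL_IP:9200/"
```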

The Elasticsearch host requires a configured environment. On a Linux host, run sysctl -w vm.max_map_count=262144. For details, see the official documentation.

To retain Elasticsearch data across container restarts, see Use a persistent data volume.

To configure your application, see Configurations.

Use a persistent data volume

To retain Elasticsearch data across container restarts, use a persistent volume for /usr/share/elasticsearch/data.

Copy the following content to a pod.yaml file, then run kubectl create -f pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: some-elasticsearch
  labels:
    name: some-elasticsearch
spec:
  containers:
    - image: marketplace.gcr.io/google/elasticsearch7
      name: elasticsearch
      volumeMounts:
        - name: elasticsearchdata
          mountPath: /usr/share/elasticsearch/data
  volumes:
    - name: elasticsearchdata
      persistentVolumeClaim:
        claimName: elasticsearchdata
---
# Request a persistent volume from the cluster using a Persistent Volume Claim.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elasticsearchdata
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
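
After creating the claim, it is worth checking that it was actually bound; whether and how quickly this happens depends on your cluster's storage provisioner:

```shell
# STATUS should read "Bound" before the pod can use the volume
kubectl get pvc elasticsearchdata
```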

Run the following to expose the port. Depending on your cluster setup, this might expose your service to the Internet with an external IP address. For more information, consult Kubernetes documentation.

kubectl expose pod some-elasticsearch --name some-elasticsearch-9200 \
  --type LoadBalancer --port 9200 --protocol TCP

Using Elasticsearch

Connect and start using Elasticsearch

Attach to the container.

kubectl exec -it some-elasticsearch -- bash

The following examples use curl. It is not installed by default, so install it first:

apt-get update && apt-get install -y curl

We can load test data into Elasticsearch using an HTTP PUT request:

curl -H "Content-Type: application/json" -XPUT http://localhost:9200/estest/test/1 -d \
'{
   "name" : "Elasticsearch Test",
   "Description": "This is just a test"
 }'
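
To confirm the document was indexed, it can be fetched back by ID (the same index, type, and ID used in the PUT above):

```shell
curl "http://localhost:9200/estest/test/1"
```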

We can search for the test data using curl (quoting the URL so the shell does not interpret the ? character):

curl "http://localhost:9200/estest/_search?q=Test"

Configurations

Using configuration volume

Assume /path/to/your/elasticsearch.yml is the configuration file on your local host. We can load it into a ConfigMap and mount it under /usr/share/elasticsearch/config in the container for Elasticsearch to read.

Create the following configmap:

kubectl create configmap elasticsearchconfig \
  --from-file=/path/to/your/elasticsearch.yml

Copy the following content to a pod.yaml file, then run kubectl create -f pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: some-elasticsearch
  labels:
    name: some-elasticsearch
spec:
  containers:
    - image: marketplace.gcr.io/google/elasticsearch7
      name: elasticsearch
      volumeMounts:
        - name: elasticsearchconfig
          mountPath: /usr/share/elasticsearch/config
  volumes:
    - name: elasticsearchconfig
      configMap:
        name: elasticsearchconfig
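
Note that mounting the ConfigMap at /usr/share/elasticsearch/config replaces the entire directory, including files such as log4j2.properties that ship with the image. If you only want to override elasticsearch.yml, a subPath mount is one option (a sketch, assuming the ConfigMap key is elasticsearch.yml):

```yaml
      volumeMounts:
        - name: elasticsearchconfig
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
```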

Run the following to expose the port. Depending on your cluster setup, this might expose your service to the Internet with an external IP address. For more information, consult Kubernetes documentation.

kubectl expose pod some-elasticsearch --name some-elasticsearch-9200 \
  --type LoadBalancer --port 9200 --protocol TCP

See Elasticsearch documentation on available configuration options.

Also see Volume reference.

Using Docker

Consult Marketplace container documentation for additional information about setting up your Docker environment.

Run Elasticsearch

Start an Elasticsearch instance

Use the following content for the docker-compose.yml file, then run docker-compose up.

version: '2'
services:
  elasticsearch:
    container_name: some-elasticsearch
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'

Or you can use docker run directly:

docker run \
  --name some-elasticsearch \
  -p 9200:9200 \
  -d \
  marketplace.gcr.io/google/elasticsearch7
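
Once the container is up (startup can take some seconds), you can verify that it responds:

```shell
# Basic node and version information
curl "http://localhost:9200/"
# Cluster health; a single node typically reports "yellow" or "green"
curl "http://localhost:9200/_cluster/health?pretty"
```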

The Elasticsearch host requires a configured environment. On a Linux host, run sysctl -w vm.max_map_count=262144. For details, see the official documentation.

To retain Elasticsearch data across container restarts, see Use a persistent data volume.

To configure your application, see Configurations.

Use a persistent data volume

To retain Elasticsearch data across container restarts, use a persistent volume for /usr/share/elasticsearch/data.

Assume /path/to/your/elasticsearch/data is a persistent data folder on your host.

Use the following content for the docker-compose.yml file, then run docker-compose up.

version: '2'
services:
  elasticsearch:
    container_name: some-elasticsearch
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
    volumes:
      - /path/to/your/elasticsearch/data:/usr/share/elasticsearch/data

Or you can use docker run directly:

docker run \
  --name some-elasticsearch \
  -p 9200:9200 \
  -v /path/to/your/elasticsearch/data:/usr/share/elasticsearch/data \
  -d \
  marketplace.gcr.io/google/elasticsearch7
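
On Linux hosts, the Elasticsearch process in the container typically runs as UID/GID 1000, so the mounted host directory must be writable by that user or Elasticsearch will fail to start. A common fix (assuming UID 1000; verify against the image you are using):

```shell
sudo chown -R 1000:1000 /path/to/your/elasticsearch/data
```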

Using Elasticsearch

Connect and start using Elasticsearch

Attach to the container.

docker exec -it some-elasticsearch bash

The following examples use curl. It is not installed by default, so install it first:

apt-get update && apt-get install -y curl

We can load test data into Elasticsearch using an HTTP PUT request:

curl -H "Content-Type: application/json" -XPUT http://localhost:9200/estest/test/1 -d \
'{
   "name" : "Elasticsearch Test",
   "Description": "This is just a test"
 }'

We can search for the test data using curl (quoting the URL so the shell does not interpret the ? character):

curl "http://localhost:9200/estest/_search?q=Test"

Configurations

Using configuration volume

Assume /path/to/your/elasticsearch.yml is the configuration file on your local host. We can mount it as a volume at /usr/share/elasticsearch/config/elasticsearch.yml in the container for Elasticsearch to read.

Use the following content for the docker-compose.yml file, then run docker-compose up.

version: '2'
services:
  elasticsearch:
    container_name: some-elasticsearch
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
    volumes:
      - /path/to/your/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

Or you can use docker run directly:

docker run \
  --name some-elasticsearch \
  -p 9200:9200 \
  -v /path/to/your/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -d \
  marketplace.gcr.io/google/elasticsearch7

See Elasticsearch documentation on available configuration options.

Also see Volume reference.

Clustering

Creating a simple cluster

In the following guide you will learn how to create a simple two-node Elasticsearch cluster. This is only an example of how to configure and link containers together, not a production-ready configuration. For a production-ready configuration, refer to the official documentation.

We will need a master node, which will also serve as the gateway to our cluster. A single agent node will be attached to the master node.

Use the following content for the docker-compose.yml file, then run docker-compose up.

version: '2'
services:
  elasticsearch-master:
    container_name: some-elasticsearch-master
    image: marketplace.gcr.io/google/elasticsearch7
    ports:
      - '9200:9200'
    command:
      - '-Enetwork.host=0.0.0.0'
      - '-Etransport.tcp.port=9300'
      - '-Ehttp.port=9200'
  elasticsearch-agent:
    container_name: some-elasticsearch-agent
    image: marketplace.gcr.io/google/elasticsearch7
    command:
      - '-Enetwork.host=0.0.0.0'
      - '-Etransport.tcp.port=9300'
      - '-Ehttp.port=9200'
      - '-Ediscovery.zen.ping.unicast.hosts=some-elasticsearch-master'
    depends_on:
      - elasticsearch-master
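
Note that discovery.zen.ping.unicast.hosts is the Elasticsearch 6-era setting name; Elasticsearch 7 still accepts it as a deprecated alias, but the current equivalents are discovery.seed_hosts and cluster.initial_master_nodes. If the agent does not join, a variant along these lines may help (a sketch, not verified against this image):

```yaml
    command:
      - '-Enetwork.host=0.0.0.0'
      - '-Ediscovery.seed_hosts=some-elasticsearch-master'
      - '-Ecluster.initial_master_nodes=some-elasticsearch-master'
```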

Or you can use docker run directly:

# elasticsearch-master
docker run \
  --name some-elasticsearch-master \
  -p 9200:9200 \
  -d \
  marketplace.gcr.io/google/elasticsearch7 \
  -Enetwork.host=0.0.0.0 \
  -Etransport.tcp.port=9300 \
  -Ehttp.port=9200

# elasticsearch-agent
docker run \
  --name some-elasticsearch-agent \
  --link some-elasticsearch-master \
  -d \
  marketplace.gcr.io/google/elasticsearch7 \
  -Enetwork.host=0.0.0.0 \
  -Etransport.tcp.port=9300 \
  -Ehttp.port=9200 \
  -Ediscovery.zen.ping.unicast.hosts=some-elasticsearch-master

After a few seconds, we can check that the cluster is running by querying http://localhost:9200/_cluster/health.
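
For example, with port 9200 published on the host:

```shell
# number_of_nodes should report 2 once the agent has joined
curl "http://localhost:9200/_cluster/health?pretty"
```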

References

Ports

These are the ports exposed by the container image.

Port Description
TCP 9114 Prometheus exporter port.
TCP 9200 Elasticsearch HTTP port.
TCP 9300 Elasticsearch default communication port.

Volumes

These are the filesystem paths used by the container image.

Path Description
/usr/share/elasticsearch/data Stores Elasticsearch data.
/usr/share/elasticsearch/config/elasticsearch.yml Stores configurations.
/usr/share/elasticsearch/config/log4j2.properties Stores logging configurations.

elasticsearch-docker's People

Contributors

aav66, armandomiani, cgn170, ekorolevich, eugenekorolevich, farajn9, harnas-google, huyhg, jprzychodzen, khajduczenia, ovk6, wgrzelak


elasticsearch-docker's Issues

GCP Command Depreciation: Change documentation

Hi Team,

Just thought of mentioning this: there is a warning while pulling the Elasticsearch image when following the documentation.

gcloud docker -- pull launcher.gcr.io/google/elasticsearch6

WARNING: gcloud docker will not be supported for Docker client versions above 18.03.

As an alternative, use gcloud auth configure-docker to configure docker to
use gcloud as a credential helper, then use docker as you would for non-GCR
registries, e.g. docker pull gcr.io/project-id/my-image. Add
--verbosity=error to silence this warning: gcloud docker --verbosity=error -- pull gcr.io/project-id/my-image.

See: https://cloud.google.com/container-registry/docs/support/deprecation-notices#gcloud-docker

Can you update the documentation in the elastic-search/6/README.md file?

Stopping right after coming up

I'm using this image with 4 GB on Google Cloud Run for a small experiment.
It comes up, but shuts down right away.

No obvious errors. Here are the only things that stand out to me:

[2021-12-02T18:17:38,064][INFO ][o.e.n.Node ] [localhost] started
...
[2021-12-02T18:17:42,054][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [localhost] Active license is now [BASIC]; Security is disabled
[2021-12-02T18:17:42.059+0000][1][safepoint   ] Safepoint "ICBufferFull", Time since last: 998624687 ns, Reaching safepoint: 178747 ns, At safepoint: 8201 ns, Total: 186948 ns
[2021-12-02T18:17:43.060+0000][1][safepoint   ] Safepoint "Cleanup", Time since last: 1001202255 ns, Reaching safepoint: 218590 ns, At safepoint: 17433 ns, Total: 236023 ns
...

...
[2021-12-02T18:21:00.199+0000][1][safepoint   ] Safepoint "Cleanup", Time since last: 12007033071 ns, Reaching safepoint: 109913 ns, At safepoint: 80280 ns, Total: 190193 ns
[2021-12-02T18:21:01,687][INFO ][o.e.n.Node               ] [localhost] stopping ...
[2021-12-02T18:21:01,691][INFO ][o.e.x.w.WatcherService   ] [localhost] stopping watch service, reason [shutdown initiated]

I tried passing discovery.type=single-node and http.host=0.0.0.0 as environment variables, but still had the same outcome.

I'm not very familiar with elasticsearch.
Any suggestions on how to run this configuration?

Thanks

Elasticsearch Index not updating or paused after some records insertion

Hej Team, Good day!

We are facing an issue while updating a specific index of the Elasticsearch deployment we created from Workloads.

Details:

Elasticsearch by Google Click to Deploy
software: Elasticsearch
version: 6.3.2-20200705-144528

Error:
{
  "textPayload": "[2021-04-13T08:40:07,629][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>\"http://elasticsearch-elasticsearch-svc.name-space.svc.cluster.local:9200/_bulk\"}\n",
  "insertId": "insert_id",
  "resource": {
    "type": "k8s_container",
    "labels": {
      "cluster_name": "xxxxx",
      "location": "europe-west4",
      "container_name": "xxxxxx",
      "project_id": "xxxxxxx",
      "pod_name": "xxxxxxx",
      "namespace_name": "xxxxxxxxx"
    }
  },
  "timestamp": "2021-04-13T08:40:07.629585135Z",
  "severity": "INFO",
  "labels": {
    "k8s-pod/task": "xxxxxxx",
    "compute.googleapis.com/resource_name": "xxxxxxx",
    "k8s-pod/k8s-app": "logstash",
    "k8s-pod/app_kubernetes_io/managed-by": "spinnaker",
    "k8s-pod/app_kubernetes_io/name": "xxxxxxxxx",
    "k8s-pod/pod-template-hash": "b79d7d586"
  },
  "logName": "projects/xxxxx/logs/stdout",
  "receiveTimestamp": "2021-04-13T08:40:11.010373788Z"
}

After some records, it gives the above error. Any help, please?
