thehive-project / elastic4play
Scala Framework for web applications using Elasticsearch
License: GNU Affero General Public License v3.0
Authentication methods (how a client proves its identity in a request: session cookie, basic authentication, API key, ...) should be configurable.
By default, all authentication methods are enabled, but each can be disabled with the configuration entry auth.methods.${methodName}=false.
Available method names are: session, key, basic and init (and maybe pki soon, #26).
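For example, HTTP basic authentication could be turned off with an entry like this (a hypothetical application.conf fragment; the surrounding configuration depends on the deployment):

```hocon
# Disable HTTP basic authentication; session, key and init stay enabled
auth.methods.basic = false
```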
User roles must be defined by the application, not by elastic4play
In ElasticSearch 2.x aggregations, fields held by a nested object were directly accessible with dot notation ("subobject.field"). In ElasticSearch 5, nested fields must be accessed through a nested aggregation.
Currently, when the index is created, default settings are used (number of replicas, number of shards). Support for the search.index.number_of_shards and search.index.number_of_replicas configuration items should be added to define index settings at index creation.
Structured queries on numeric fields raise a java.io.IOException: can not write type [class scala.math.BigDecimal]
Example of query:
{
"_query": "age",
"_value": 42
}
String queries work ({ "_string": "age:42" }).
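One possible fix can be sketched as follows, assuming the root cause is that JSON numbers are parsed into scala.math.BigDecimal, which the ElasticSearch client cannot serialize; normalizing them to Long or Double before building the query avoids the exception (the helper name is illustrative, not the actual elastic4play code):

```scala
// Illustrative helper: normalize a BigDecimal parsed from JSON into a
// type the ElasticSearch client can serialize (Long when the value is
// an exact integer, Double otherwise).
def normalizeNumber(value: BigDecimal): Any =
  if (value.isValidLong) value.toLongExact
  else value.toDouble

// "age": 42 is parsed as BigDecimal(42); it becomes a plain Long
val normalized = normalizeNumber(BigDecimal(42))
```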
The Stream system allows different components to transmit messages (for example, the addition, removal or update of an entity) on a bus.
Currently, messages sent to the Stream have a local scope: they are not transmitted to other application nodes when the application runs in a cluster.
The aim of this issue is to make the Stream cluster-ready: messages are transmitted to all application nodes, as if each component were on the same node.
In configuration, auth.type is deprecated and replaced by auth.provider.
In ElasticSearch 5.x, fielddata is disabled by default for text fields (cf. the ElasticSearch documentation), so these fields can't be used in aggregations. Fielddata must be explicitly enabled.
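Enabling fielddata is done per field in the index mapping; a hypothetical fragment (the field name is illustrative):

```json
{
  "properties": {
    "title": {
      "type": "text",
      "fielddata": true
    }
  }
}
```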
Make ElasticSearch cluster health available to the application.
ElasticSearch doesn't return the version of a document by default. Thus, updating a specific version (using ModifyConfig) doesn't work.
Attribute "user" has been deprecated and replaced by "createdBy".
The database is ready if the index exists and the migration is complete. There is a typo in the second test.
This issue extends #9 by adding custom settings to index creation.
In application.conf, the index can be configured as follows:
search {
index = the_hive
nbreplicas = 1
settings {
mapping.nested_fields.limit = 100
}
}
If a search query is invalid, the error handler returns an internal error (500). It should return a bad request error (400).
This issue is related to TheHive-Project/TheHive#285
If the number of documents requested by the search function is greater than a configurable limit, documents are loaded from ElasticSearch and transmitted to the client in pages, using Scroll.
The Scroll function in ElasticSearch doesn't accept a "from" parameter; this must be managed by the application.
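How the application can emulate "from" on top of scrolling can be sketched like this, assuming results arrive as successive batches (the paging source is simulated here; the real implementation would pull pages from the ElasticSearch Scroll API):

```scala
// Simulated scroll pages; in reality each page comes from a scroll request.
val pages: Iterator[Seq[String]] =
  Iterator(Seq("doc1", "doc2"), Seq("doc3", "doc4"), Seq("doc5"))

// Emulate "from"/"size" by skipping the first `from` documents across pages.
def scrollWithFrom(pages: Iterator[Seq[String]], from: Int, size: Int): Seq[String] =
  pages.flatten.slice(from, from + size).toSeq

val result = scrollWithFrom(pages, 3, 2) // skips doc1..doc3
```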
from TheHive-Project/TheHive#383
When setting multiple ES hosts in TheHive configuration:
search.host = ["es-server1:9300", "es-server2:9300", "es-server3:9300", "es-server4:9300"]
the generated URI is not correct:
elasticsearch://es-server1:9300,elasticsearch://es-server2:9300,elasticsearch://es-server3:9300,elasticsearch://es-server4:9300
It should be:
elasticsearch://es-server1:9300,es-server2:9300,es-server3:9300,es-server4:9300
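The expected URI can be built by emitting the scheme once and joining the hosts with commas; a minimal sketch (not the actual elastic4play code):

```scala
// Build a single elasticsearch:// URI from the configured host list,
// prefixing the scheme once instead of once per host.
def buildUri(hosts: Seq[String]): String =
  "elasticsearch://" + hosts.mkString(",")

val uri = buildUri(Seq("es-server1:9300", "es-server2:9300"))
```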
Some attributes can contain heavy data. These attributes are usually marked as unaudited in order to limit the size of audit logs.
A parameter must be added to AuxSrv to filter out unaudited attributes.
DBList caches its items to increase performance. When an update is done, the cache must be invalidated; otherwise, old items will be returned until the cache expires (10 seconds).
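The required behavior can be illustrated with a tiny cache that is cleared on every write (a sketch only; the real DBList cache additionally has a 10-second expiry):

```scala
// Minimal illustration of write-through invalidation: any update clears
// the cached items so the next read refetches fresh data.
class CachedList[A](load: () => Seq[A], store: A => Unit) {
  private var cache: Option[Seq[A]] = None

  def items: Seq[A] = cache.getOrElse {
    val loaded = load()
    cache = Some(loaded)
    loaded
  }

  def add(item: A): Unit = {
    store(item)
    cache = None // invalidate so stale items are never served
  }
}
```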
The aim of this task is to optimize imports, follow the code style guide, add return types to public methods, ...
Currently, items in a DBList can be listed, added and removed, but not updated.
The aim of this issue is to add a method to update a DBList entry.
Errors generated by ElasticSearch and by elastic4play services should be handled globally using play.http.errorHandler.
Add a specific exception for duplicate entity (ElasticSearch conflict error).
5e0a560 Update playframework to 2.5.14
Elastic4play should be able to accept an already uploaded attachment without needing to extract it from the database. Currently, an attachment can be added to an entity using FileInputValue, which accepts only files stored in local storage. In order to implement this feature, a new type of input value must be implemented: AttachmentInputValue.
The custom fields attribute is a JSON object that contains any number of fields. Each field contains a nested object whose key is the field type. Supported types are: string, number, date and boolean.
This structure keeps the ElasticSearch mapping stable (the same field name always has the same type).
Each field may also contain the key order, which is used to order fields in a list.
Here is an example of valid custom fields:
{
"fieldName": { "number": 42, "order": 2 },
"otherField": { "boolean": true, "order": 1 },
"thirdField": { "date": 1497861727 }
}
Elastic4play accepts fields with multiple types (e.g. { "otherField": { "boolean": true, "number": 1 } }).
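The single-type constraint can be checked as in this sketch, assuming a field is represented as a map from keys to values and "order" is the only non-type key (names are illustrative, not the actual elastic4play code):

```scala
// Hypothetical validator: a custom field is valid when, ignoring the
// optional "order" key, it holds exactly one of the supported type keys.
val supportedTypes = Set("string", "number", "date", "boolean")

def isValidCustomField(field: Map[String, Any]): Boolean =
  (field.keySet - "order").toList match {
    case single :: Nil => supportedTypes.contains(single)
    case _             => false
  }
```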
getEntity should retrieve the entity from the index related to its database state, not from the latest index.
Some REST methods require at least one of several roles. The aim of this issue is the ability to provide several roles; the method is executed if the user has at least one of these roles.
The attachment service can save an attachment from a file but not from memory.
The aim of this issue is to add a method to save an attachment from data in memory (an array of bytes).
The aim of this issue is to remove the modelName parameter of the stream sink and use the attribute _type to determine the document type to create.
The aim of this feature is to add a _query field to selectable aggregations (avg, min, max, sum and count).
With this feature, it could be interesting to have several aggregations of the same type with different filters. To prevent name collisions, an optional name can be set on any aggregation.
For example:
{
"query": { "_field": "status", "_value": "new" },
"stats": [{
"_agg": "time",
"_fields": ["createdAt"],
"_interval": "1h",
"_name": "documentsOverTime",
"_select": [{
"_agg": "count",
"_name": "LowPriorityCount",
"_query": { "_field": "priority", "_value": "low" }
},
{
"_agg": "count",
"_name": "highPriorityCount",
"_query": { "_field": "priority", "_value": "high" }
}
]
}]
}
The reason why authentication fails (user doesn't exist, wrong password, ...) must not be exposed to the user.
An attribute should provide its own definition:
With this information, the application can build forms dynamically.
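For example, an attribute definition exposed to clients might look like this (a hypothetical shape, not the actual elastic4play output):

```json
{
  "name": "severity",
  "type": "number",
  "editable": true,
  "required": false
}
```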
The result of a get request can easily be cached to increase performance.
Request from TheHive-Project/TheHive#297
Add client certificate authentication.
This issue aims to fix the build configuration file to add support for Bintray and to update some organisation configuration.
Fields (from the HTTP request) are checked against model attributes. This check reports invalid formats, updates of read-only attributes, and unknown and missing attributes.
When an attribute has an object format (it defines sub-attributes), the check is not complete: it doesn't look for missing sub-attributes.
The aim of this issue is to give the application more control over ElasticSearch behavior when concurrent accesses occur on the same document.
In update methods, a new parameter is added to define:
Some methods in Fields are missing; it is not possible to get an InputValue except by using map.
This class must be reviewed.
JsonInputValue is the sole InputValue that supports sub-attributes, so attachments are mandatory in the top-level fields.
An attachment can be created from a FileInputValue (FIV) or from an AttachmentInputValue (AIV). The idea is to permit creating an AIV from JSON. A FIV must not be created from JSON because it contains a reference to a file on disk and may pose a security risk.
The JSON representation of an AIV should be provided by the service layer, not taken from the HTTP request, as it references an attachment already stored in the datastore and must contain coherent data (in particular, hashes).
Related to CERT-BDF/TheHive#354.
The ISO date format (yyyyMMdd'T'HHmmssZ) is not convenient in many cases. It was originally chosen because it handles time zones, but it comes with more problems than benefits.
It is replaced by timestamps in milliseconds. The previous format is still accepted.
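Accepting both representations can be sketched with a parser that first tries a numeric timestamp and falls back to the previous pattern (a sketch only; the actual attribute parsing in elastic4play may differ):

```scala
import java.text.SimpleDateFormat
import java.util.Date

// Try the new format first (epoch milliseconds), then fall back to the
// previous yyyyMMdd'T'HHmmssZ pattern for backward compatibility.
def parseDate(value: String): Date =
  if (value.nonEmpty && value.forall(_.isDigit)) new Date(value.toLong)
  else new SimpleDateFormat("yyyyMMdd'T'HHmmssZ").parse(value)
```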
The database migration uses reactive streams. These streams should be configurable (buffer size, number of threads, timeouts, debug, ...).
The goal of this issue is to format the Scala code automatically during the build process using the Scalariform SBT plugin (https://github.com/sbt/sbt-scalariform).
Permit a user to authenticate using an API key. Two new methods must be implemented in AuthSrv:
The TCPClient from elastic4s that you are using to communicate with ElasticSearch has been deprecated in favor of the HTTPClient. It will be removed in the next major version of elastic4s, which already has a first release candidate.
According to the README, you can just swap the import paths from elastic-tcp to elastic-http and change code uses of TCPClient to HTTPClient. That said, there is a warning about the responses from client.execute() being different, so the change is a little more involved:
Requests are the same for either client, but response classes may vary slightly as the HTTP response classes model the returned JSON whereas the TCP response classes wrap the Java client classes
Once complete and added to TheHive, people can insert an AWS authenticating proxy to use AWS-hosted ElasticSearch, ending a standoff between TheHive developers and AWS dogma. Apparently, support for a signed HTTP transport in elastic4s is in the works.
Ideally, a pluggable transport would allow people to pick and choose (tcp, http, http-aws) rather than have the choice dictated to them.
Anyway, this isn't about AWS ES but about the TCP transport going away in the library you are using. Better to start a discussion about what to do and when than to wait until the code atrophies.
I am not able to create users except the admin account. The application.conf file has the default settings. Please guide me on where I need to make changes? Need help as soon as possible.
This API should allow a user to check whether a document exists in the DBList table by providing a key/value filter.
Example: check if there is a custom field with a reference attribute set to the value "businessImpact".
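Such an existence check could take a request shaped like the existing query DSL; a hypothetical example (the request shape and field names are assumptions, not the actual API):

```json
{ "_field": "reference", "_value": "businessImpact" }
```

The response could then be a simple boolean payload such as { "found": true }.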