zapr-oss / druidry
Java based Druid Query Generator library
License: Apache License 2.0
When I import fastjson in my pom, the response code is 500; removing it makes things OK.
code:
public static void main(String[] args) {
    DateTime startTime = new DateTime(2016, 1, 1, 0, 0, 0, DateTimeZone.UTC);
    DateTime endTime = new DateTime(2017, 1, 2, 0, 0, 0, DateTimeZone.UTC);
    Interval interval = new Interval(startTime, endTime);
    PagingSpec pagingSpec = new PagingSpec(5, new HashMap<>());
    Granularity granularity = new SimpleGranularity(PredefinedGranularity.ALL);
    DruidSelectQuery query = DruidSelectQuery.builder()
            .dataSource("wikipedia")
            .descending(false)
            .granularity(granularity)
            .intervals(Collections.singletonList(interval))
            .pagingSpec(pagingSpec)
            .build();
    try {
        DruidConfiguration config = DruidConfiguration
                .builder()
                .host("localhost")
                .port(8082)
                .endpoint("druid/v2/")
                .build();
        DruidClient client = new DruidJerseyClient(config);
        client.connect();
        String query1 = client.query(query);
        System.out.println(query1);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
exception:
in.zapr.druid.druidry.client.exception.QueryException: null
at in.zapr.druid.druidry.client.DruidJerseyClient.handleInternalServerResponse(DruidJerseyClient.java:145)
at in.zapr.druid.druidry.client.DruidJerseyClient.query(DruidJerseyClient.java:108)
at com.aaa.demo.springboot.DemoApplicationTests.main(DemoApplicationTests.java:50)
in.zapr.druid.druidry.client.exception.QueryException
at in.zapr.druid.druidry.client.DruidJerseyClient.handleInternalServerResponse(DruidJerseyClient.java:145)
at in.zapr.druid.druidry.client.DruidJerseyClient.query(DruidJerseyClient.java:108)
at com.aaa.demo.springboot.DemoApplicationTests.main(DemoApplicationTests.java:50)
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.58</version>
</dependency>
How should I set the timezone of the granularity to Asia/Hong_Kong? And where can I find the API documentation? Thanks!
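For reference, in Druid's native query JSON a zoned granularity is expressed as a period granularity with a timeZone field (whether druidry exposes this depends on the version; the snippet below is native Druid JSON, not druidry API):

```json
{
  "granularity": {
    "type": "period",
    "period": "P1D",
    "timeZone": "Asia/Hong_Kong"
  }
}
```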
The client should implement the AutoCloseable interface. In a normal run there aren't many use cases for it, but in test cases resource management (e.g. try-with-resources) can come in handy.
Through the 'case when' statement on the /console/druid/ page, the following JSON is obtained:
{ "queryType": "groupBy", "dataSource": { "type": "table", "name": "dw_action_sec_data_test" }, "intervals": { "type": "intervals", "intervals": [ "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z" ] }, "virtualColumns": [], "filter": null, "granularity": { "type": "all" }, "dimensions": [ { "type": "default", "dimension": "actionId", "outputName": "d0", "outputType": "LONG" }, { "type": "default", "dimension": "applicationId", "outputName": "d1", "outputType": "LONG" } ], "aggregations": [ { "type": "count", "name": "a0" }, { "type": "longSum", "name": "a1", "fieldName": "reqCount", "expression": null }, { "type": "longSum", "name": "a2", "fieldName": "errorCount", "expression": null } ], "postAggregations": [ { "type": "expression", "name": "p0", "expression": "case_searched(((\"a1\" - \"a2\") > 1),1,11110)", "ordering": null } ], "having": null, "limitSpec": { "type": "default", "columns": [], "limit": 100 }, "context": { "sqlOuterLimit": 100, "sqlQueryId": "d45d2637-1a15-43aa-9e6f-fe86a7b281f7" }, "descending": false }
The type of the postAggregation is 'expression', but I have found no corresponding type in druidry. Is this type not supported, or did I just not find the right class?
I am trying to use a select query with pagination, but it is not working. After connecting to Druid using druidry, the query is passed to DruidClient but the response comes back null. I checked the same query in JSON format using a curl command, and that worked. Please reply as soon as possible.
Copyright 2018-present Red Brick Lane Marketing Solutions Pvt. Ltd.
I wanted to bring this up if it's something you'd consider. In projects that have already adopted Java 8's java.time package, it's frustrating to add back the Joda package and convert between the two sets of objects.
This README document seems out of date.
A non-abstract class extending DimensionSpec is required to use this feature. Currently the client has to create a new subclass just to make an object.
While building with Maven, we should specify the encoding so that the build doesn't pick up the platform encoding of the machine it runs on.
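A sketch of the usual fix, using the conventional Maven encoding properties (these property names are Maven standards, not taken from this project's pom):

```xml
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
</properties>
```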
Hi,
when executing a query with this class on AWS Lambda I am getting:
MessageBodyWriter not found for media type=application/json, type=class in.zapr.druid.druidry.query.aggregation.DruidGroupByQuery, genericType=class in.zapr.druid.druidry.query.aggregation.DruidGroupByQuery.
15:41:38.581 [main] ERROR in.zapr.druid.druidry.client.DruidJerseyClient - Exception while querying {}
It seems that the solution is to add a no-args constructor.
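The workaround in sketch form (ResponseDto is a hypothetical name; JSON providers typically instantiate the target type reflectively, which needs a default constructor unless another creator is configured):

```java
public class ResponseDto {
    private String name;

    // No-arg constructor: JSON providers instantiate via reflection,
    // which fails without it when no other creator is registered.
    public ResponseDto() { }

    public ResponseDto(String name) { this.name = name; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```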
While writing a SearchFilter, it expects a SearchQuerySpec where type and value are mandatory. Currently there is no value attribute and no way to create the object.
https://druid.apache.org/docs/latest/querying/searchqueryspec.html
https://druid.apache.org/docs/latest/querying/datasource.html
The query data source type can be used for nested groupBy. Does druidry support that?
DruidConfiguration config = DruidConfiguration.builder().host("broker.druid.data.srv/druid/v2").build();
The URL generated by the above code is "http://broker.druid.data.srv/druid/v2/:8082/". A POST request built with this URL returns 404; I expect it to be "http://broker.druid.data.srv/druid/v2/". I don't need the port and endpoint. Can I set the URL directly?
The current version is not compatible with Java 11 because of the outdated Lombok library. To make it usable with Java 11, Lombok must be updated from version 1.16.14 to the latest version, 1.18.10. Could you apply this change?
Is there any support for PagingSpec?
There are builders in aggregation queries like DruidGroupByQuery and DruidTopNQuery, but there is no builder in DruidSearchQuery.
TimeFormatExtractionFunction uses SimpleDateFormat patterns for its format, but the patterns used in this class differ from Joda's DateTimeFormat. DateTimeFormat has the 'e' pattern for the day-of-week number, while SimpleDateFormat uses the pattern 'u'; SimpleDateFormat does not support the pattern 'e'.
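A minimal stdlib sketch of the mismatch: SimpleDateFormat's 'u' letter prints the day-of-week number, while Joda's DateTimeFormat spells the same field 'e' (which SimpleDateFormat rejects):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.Locale;

public class DayOfWeekPattern {
    public static String dayOfWeekNumber(Calendar day) {
        // SimpleDateFormat: 'u' = day-of-week number (1 = Monday ... 7 = Sunday).
        // Joda's DateTimeFormat uses 'e' for this field; 'e' is not a valid
        // SimpleDateFormat letter and throws IllegalArgumentException there.
        return new SimpleDateFormat("u", Locale.US).format(day.getTime());
    }

    public static void main(String[] args) {
        // 2016-01-04 was a Monday
        System.out.println(dayOfWeekNumber(new GregorianCalendar(2016, Calendar.JANUARY, 4))); // 1
    }
}
```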
When I checked, I got an error: in.zapr.druid.druidry.client.exception.QueryException: javax.ws.rs.ProcessingException: java.net.UnknownHostException: http
When I use druidry in Spark Streaming (version 2.3.2), I get the following error:
19/03/22 15:27:00 ERROR DruidJerseyClient: Exception while querying {}
in.zapr.druid.druidry.client.exception.QueryException
at in.zapr.druid.druidry.client.DruidJerseyClient.handleInternalServerResponse(DruidJerseyClient.java:128)
at in.zapr.druid.druidry.client.DruidJerseyClient.query(DruidJerseyClient.java:91)
Any help for this?
Partial support is already merged in a separate branch, extension-histogram.
Hi:
I use druidry like this:
DruidTimeSeriesQuery query = DruidTimeSeriesQuery.builder()
        .dataSource("druid_test_2")
        .granularity(granularity)
        .intervals(Collections.singletonList(interval))
        .descending(true)
        .filter(filter)
        .aggregators(Collections.singletonList(aggregator1))
        .build();
ObjectMapper mapper = new ObjectMapper();
String requiredJson = mapper.writeValueAsString(query);
DruidConfiguration config = DruidConfiguration
        .builder()
        .protocol(DruidQueryProtocol.HTTP)
        .host("my druid host")
        .port(8082)
        .endpoint("druid/v2/")
        .concurrentConnectionsRequired(5)
        .build();
DruidClient client = new DruidJerseyClient(config);
client.connect();
List<DruidResponse> responses = client.query(query, DruidResponse.class);
and I get the following error:
{"error":"Unknown exception","errorMessage":"Could not resolve type id 'TIMESERIES' into a subtype of [simple type, class org.apache.druid.query.Query]: known type ids = [Query, dataSourceMetadata, groupBy, scan, search, segmentMetadata, select, timeBoundary, timeseries, topN]\n at [Source: HttpInputOverHTTP@167674ce[c=1242,q=0,[0]=null,s=STREAM]; line: 1, column: 1217]","errorClass":"com.fasterxml.jackson.databind.JsonMappingException","host":null}
Hey!
Firstly, thanks for open sourcing this. Found this really helpful!
Is there any way you could add Unit Tests for https://github.com/zapr-oss/druidry/blob/master/src/main/java/in/zapr/druid/druidry/client/DruidJerseyClient.java#L107
I am trying to figure out how to distinguish query results between select, timeseries and groupBy. I want the response as a List and not a String.
hi:
I want to query Druid like this SQL: select * from table_a where name is null. I used EXPLAIN PLAN FOR in dsql, and it shows a filter of {"type":"selector","dimension":"name","value":null,"extractionFn":null}. But I get an error when using SelectorFilter("name", null), and SelectorFilter("name") fails with a private-access error. How should I write this query?
The resultAsArray context key for GroupBy queries is not supported. It is a boolean.
See the bottom of https://druid.apache.org/docs/latest/querying/groupbyquery.html
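For reference, the Druid-side usage is a boolean context key on the groupBy query (native query JSON, per the linked docs):

```json
{
  "queryType": "groupBy",
  "context": {
    "resultAsArray": true
  }
}
```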
I have an API which does something like this:
void runDruidQuery(DruidQuery query) {
    DruidConfiguration config = DruidConfiguration.builder()
            .host(appConfig.getDruidHost())
            .port(appConfig.getDruidPort())
            .endpoint(appConfig.getDruidEndpoint())
            .build();
    client = new DruidJerseyClient(config);
    client.connect();
    client.query(query);
    client.close();
}
This API works fine the first time. But on the second call I get:
in.zapr.druid.druidry.client.exception.QueryException: javax.ws.rs.ProcessingException: java.lang.IllegalStateException: Connection pool shut down
I tried removing the close() call. With that, the second call works, but I still get the exception randomly after some time.
Do I need to configure some timeout configuration anywhere?
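A common workaround is to connect the client once and reuse it across calls, closing it only on application shutdown. A sketch of that lifecycle pattern (ClientHolder and the stand-in Client interface are hypothetical, not druidry API):

```java
// Stand-in for a druidry-style client: connect once, reuse, close at shutdown.
public class ClientHolder {
    interface Client extends AutoCloseable {
        void connect();
        String query(String q);
    }

    private final Client client;

    public ClientHolder(Client client) {
        this.client = client;
        client.connect();          // connect once, not per request
    }

    public String run(String q) {
        return client.query(q);    // reuse the live connection pool
    }

    public void shutdown() throws Exception {
        client.close();            // close only when the app stops
    }
}
```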
The ObjectMapper's writeValueAsString method handles the Enum type properly, as seen in the console. However, when it goes through the query method defined in DruidJerseyClient, the Entity.entity method seems to resolve the Enum type (QueryType) improperly. From the server log, it seems the query couldn't be parsed. My druidry version is 2.3.
@Override
public <T> List<T> query(DruidQuery druidQuery, Class<T> className) throws QueryException {
    try (Response response = this.queryWebTarget
            .request(MediaType.APPLICATION_JSON)
            .post(Entity.entity(druidQuery, MediaType.APPLICATION_JSON))) {
        ...
The server log shows "Exception occurred on request [unparsable query]
com.fasterxml.jackson.databind.JsonMappingException: Could not resolve type id 'TIMESERIES' into a subtype of [simple type, class io.druid.query.Query]"
Thanks for any comments.
When I add the fastjson Maven dependency to my classpath, it stops working.
Period Granularity should extend Druid Granularity
After querying Druid I got HTTP status code 411. After setting the property below on the ClientConfig, it was resolved:
this.jerseyConfig.property(ClientProperties.REQUEST_ENTITY_PROCESSING, RequestEntityProcessing.BUFFERED);
Hi!
I have added an authentication step in my Druid environment. How can I continue using this library?
Currently, druidry classes like DruidFilter, DruidDimensions, and Granularity are not testable. That makes it difficult to test code built with these classes. Please add equals and hashCode methods to make them testable.
I'm working on a legacy project that uses an old version of jersey (com.sun.jersey 1.19.1), which is not compatible with Druidry (using org.glassfish.jersey 2.*). Upgrading is currently a pain, since lots of other dependencies depend on it.
I looked at the code of druidry, and it seems Jersey-client is only used in the DruidJerseyClient class. I am wondering whether it would be simple enough to have a druidry version that supports the legacy version of Jersey. If so, that would be great.
Thanks!
public class IntervalFilter extends DruidFilter {

    private static final String INTERVAL_DRUID_FILTER_TYPE = "interval";

    private String type;
    private String dimension;
    private List<Interval> intervals;

    public IntervalFilter(String dimension, List<Interval> intervals) {
        this.type = INTERVAL_DRUID_FILTER_TYPE; // serializes as {"type": "interval", ...}
        this.dimension = dimension;
        this.intervals = intervals;
    }

    // TODO: support for Extraction Function
}
Is the distinct count aggregator supported in druidry? If not, is there any way to include distinct count in aggregations?
http://druid.io/docs/latest/development/extensions-contrib/distinctcount.html
@GG-Zapr
In the examples, there is a bug. When creating the DruidTopNQuery object, one needs to set the dataSource, but it shouldn't be the String type. How can I fix it?
I need to filter groupBy query results which is like the HAVING clause in SQL.
Druid doc: http://druid.io/docs/latest/querying/having.html
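For reference, the Druid-side having spec attaches to the groupBy query like this (native query JSON per the linked doc; "num_total" and the threshold are illustrative):

```json
{
  "queryType": "groupBy",
  "having": {
    "type": "greaterThan",
    "aggregation": "num_total",
    "value": 100
  }
}
```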
Hi
The interval returns this format:
return String.format(DRUID_INTERVAL_FORMAT, startTime.toDateTimeISO(), endTime.toDateTimeISO());
which results in:
"intervals": ["2018-11-19T00:00:00.000-05:00/2018-11-20T00:00:00.000-05:00"],
However, my query requires the format to be :
"intervals": ["2018-11-01T00:00:00.000Z/2018-11-02T00:00:00.000Z"]
Any idea on how to fix that?
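The Z-suffixed form is just the same instant rendered in UTC; converting before formatting produces it. Druidry itself takes Joda DateTime values, where constructing with DateTimeZone.UTC has the same effect, but the idea can be shown with the stdlib java.time API (UtcInterval is an illustrative name):

```java
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class UtcInterval {
    public static String toUtcIso(ZonedDateTime t) {
        // Same instant, shifted to UTC, so the offset prints as "Z"
        // instead of a local offset like "-05:00".
        return t.withZoneSameInstant(ZoneOffset.UTC).toInstant().toString();
    }

    public static void main(String[] args) {
        ZonedDateTime local = ZonedDateTime.of(2018, 11, 19, 0, 0, 0, 0, ZoneId.of("-05:00"));
        System.out.println(toUtcIso(local)); // 2018-11-19T05:00:00Z
    }
}
```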
Hi,
I am facing an issue with the TopN query. When I generate a druidry topN query in code and query Druid with it (client.query()), I get the following exception:
javax.ws.rs.ProcessingException: Error reading entity from input stream.
Complete trace :
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Cannot deserialize instance of java.util.LinkedHashMap
out of START_ARRAY token
at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 51] (through reference chain: java.util.ArrayList[0]->com.uipath.analytics.dataPlatform.store.DruidResponse["result"])
at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:63) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1342) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1138) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1092) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.std.StdDeserializer._deserializeFromEmpty(StdDeserializer.java:599) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:360) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:29) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:136) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:286) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:245) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:27) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.ObjectReader._bind(ObjectReader.java:1574) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:965) ~[jackson-databind-2.9.6.jar:2.9.6]
at org.glassfish.jersey.jackson.internal.jackson.jaxrs.base.ProviderBase.readFrom(ProviderBase.java:838) ~[jersey-media-json-jackson-2.26.jar:na]
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.invokeReadFrom(ReaderInterceptorExecutor.java:257) ~[jersey-common-2.26.jar:na]
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$TerminalReaderInterceptor.aroundReadFrom(ReaderInterceptorExecutor.java:236) ~[jersey-common-2.26.jar:na]
at org.glassfish.jersey.message.internal.ReaderInterceptorExecutor.proceed(ReaderInterceptorExecutor.java:156) ~[jersey-common-2.26.jar:na]
at org.glassfish.jersey.message.internal.MessageBodyFactory.readFrom(MessageBodyFactory.java:1091) ~[jersey-common-2.26.jar:na]
at org.glassfish.jersey.message.internal.InboundMessageContext.readEntity(InboundMessageContext.java:874) ~[jersey-common-2.26.jar:na]
... 77 common frames omitted
2018-10-17 12:28:34 INFO DruidReader:66 - Druid execution exception :in.zapr.druid.druidry.client.exception.QueryException: javax.ws.rs.ProcessingException: Error reading entity from input stream.
The query which was generated is -
{
"dataSource": "system-process-insights",
"queryType": "topN",
"intervals": ["2018-10-06T11:30:00.000+05:30/2018-10-06T12:30:00.000+05:30"],
"granularity": "hour",
"aggregations": [{
"type": "longSum",
"name": "a1",
"fieldName": "memUsedVirtual"
}, {
"type": "longSum",
"name": "a2",
"fieldName": "memUsedRam"
}],
"dimension": "processName",
"threshold": 5,
"metric": "a1"
}
If I run this same query directly (using curl), it gives the proper result, but through druidry it fails.
Other queries (groupBy/timeSeries) are fine, and if I change the type of this query to groupBy, it works.
Please help.
I am trying to use druidry (latest version: 2.13).
Connection Code
DruidConfiguration config = DruidConfiguration.builder()
        .protocol(DruidQueryProtocol.HTTP)
        .host("<host>")
        .port(8082)
        .endpoint("druid/v2/")
        .concurrentConnectionsRequired(5)
        .build();
ClientConfig clientConfig = new ClientConfig();
clientConfig.register(JacksonFeature.class);
DruidJerseyClient client = new DruidJerseyClient(config, clientConfig);
client.connect();
client.close();
The error is as below:
{ "timestamp": "2019-06-03T08:11:07.597+0000", "status": 500, "error": "Internal Server Error", "message": "javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;", "trace": "java.lang.AbstractMethodError: javax.ws.rs.core.UriBuilder.uri(Ljava/lang/String;)Ljavax/ws/rs/core/UriBuilder;\n\tat javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:120)\n\tat org.glassfish.jersey.client.JerseyWebTarget.<init>(JerseyWebTarget.java:72)\n\tat org.glassfish.jersey.client.JerseyClient.target(JerseyClient.java:344)\n\tat org.glassfish.jersey.client.JerseyClient.target(JerseyClient.java:80)\n\tat in.zapr.druid.druidry.client.DruidJerseyClient.connect(DruidJerseyClient.java:80)\n\tat <>.getAvgListsPerCustomer(ListsController.java:191)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:189)\n\tat org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)\n\tat org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892)\n\tat org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)\n\tat org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)\n\tat org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)\n\tat 
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)\n\tat org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)\n\tat org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:634)\n\tat org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:741)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\n\tat org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\n\tat org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\n\tat org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\n\tat org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)\n\tat org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:200)\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)\n\tat org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490)\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139)\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343)\n\tat org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408)\n\tat org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)\n\tat org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834)\n\tat org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415)\n\tat org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)\n\tat java.lang.Thread.run(Thread.java:748)\n", "path": "<>" }
How can I resolve this?
Thank you.
Hi,
druidry uses Jersey for communication over HTTP. This causes issues if the host app includes dependencies that use JAX-RS 1.x, for example a Spring Boot app running Spring Cloud and therefore libraries such as Netflix's Eureka, Feign, and Ribbon. There are open issues in the Spring Cloud Netflix project for this.
Would you consider removing JAX-RS and use something else instead?
I found that a timeseries query can set
"context" : {
"skipEmptyBuckets": "true"
}
to support zero-filling.
But I cannot find it in druidry.
Is anyone using the DruidClient to make requests to druid cluster that's protected by kerberos authentication?