
netflix / conductor

12.9K stars · 547 watchers · 2.3K forks · 28.78 MB

Conductor is a microservices orchestration engine.

License: Apache License 2.0

Shell 0.03% Java 73.41% CSS 0.03% HTML 0.05% JavaScript 8.25% Dockerfile 0.07% Groovy 17.88% SCSS 0.09% TypeScript 0.19%
orchestration-engine java spring-boot distributed-systems workflow-management microservice-orchestration workflow-engine reactjs javascript workflow-automation

conductor's People

Contributors

alexmay48, apanicker-nflx, aravindanr, clari-akhilesh, cyzhao, dependabot[bot], falu2010-netflix, gorzell, huangyiminghappy, hunterford, ismaley, james-deee, josedab, jvemugunta, jxu-nflx, kishorebanala, leandromoreira, lordbender, manan164, mashurex, mdepak, mstier-nflx, naveenchlsn, pctreddy, peterlau, picaron, rickfish, s50600822, v1r3n, vmg


conductor's Issues

Worker Polling Intervals Guidance

I know "it depends", but can you share some real-world experience on how often worker microservices should poll the Conductor server for new tasks via GET /tasks/poll/{tasktype}?

What would be a good guideline? Should it be every second, every 5 seconds, etc.?

Or to say it differently:

  • Are there any performance numbers available for how many requests a Conductor server on specific hardware can handle (before it gets overwhelmed)?
  • What would be a good way to proactively monitor the Conductor server to see whether it is getting overwhelmed?
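
For reference, the naive polling loop in question looks roughly like this (a minimal sketch, assuming the server's default port 8080 and a made-up task type "my_task"; the sleep interval at the bottom is exactly the knob this question asks about):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PollingWorker {
    public static void main(String[] args) throws Exception {
        long pollIntervalMs = 1000; // 1s? 5s? This is the open question.
        while (true) {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080/api/tasks/poll/my_task").openConnection();
            conn.setRequestProperty("Accept", "application/json");
            if (conn.getResponseCode() == 200) {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    // Execute the task here and POST the result back to the server.
                    System.out.println("Polled task: " + in.lines().reduce("", String::concat));
                }
            } // a non-200 response (e.g. no pending task) is simply skipped
            conn.disconnect();
            Thread.sleep(pollIntervalMs);
        }
    }
}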

Many thanks

Admin Configuration

In the swagger docs there is an Admin section which allows GET on /admin/config. What admin configurations are supported, and can these be set/updated when the server is running?

Thanks!

Conductor using Docker

Is there any Docker image available for Conductor (or at least are there plans for it)?
Searching on hub.docker.com didn't show me any results.

Thanks a lot.

build problem

When I build the Conductor project on Windows 7 following the Getting Started guide, it fails.
The 5 errors are listed as follows:
1. Unable to determine the host name on this Windows instance ---the first error
......
2. Publication nebula not found in project :. ---the second error
[buildinfo] Not using buildInfo properties file for this build.
Publication named 'nebula' does not exist for project ':' in task ':artifactoryPublish'.
...... ---the second error
3. 826 [main] DEBUG org.elasticsearch.script - [Hazmat] failed to load mustache
java.lang.ClassNotFoundException: com.github.mustachejava.Mustache
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
...... ---the third error
4. 2497 [main] WARN org.elasticsearch.bootstrap - JNA not found. native methods will be disabled.
java.lang.ClassNotFoundException: com.sun.jna.Native
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
...... ---the fourth error
5. 12677 [main] INFO org.elasticsearch.node - [Hazmat] started
12677 [main] INFO com.netflix.conductor.demo.EmbeddedElasticSearch - ElasticSearch cluster elasticsearch_test started in local mode on port 9200
Exception in thread "main" java.nio.file.InvalidPathException: Illegal char <:> at index 2 : /E:/conduct_v1/conductor/test-harness/build/resources/test/es_template.json
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at com.netflix.conductor.demo.EmbeddedElasticSearch.start(EmbeddedElasticSearch.java:62)
at com.netflix.conductor.demo.EmbeddedElasticSearch.start(EmbeddedElasticSearch.java:37)
at com.netflix.conductor.demo.Main.main(Main.java:42)
12682 [Thread-1] INFO org.elasticsearch.node - [Hazmat] stopping ...
12772 [elasticsearch[Hazmat][clusterService#updateTask][T#1]] INFO org.elasticsearch.gateway - [Hazmat] recovered [0] indices into cluster_state
12772 [elasticsearch[Hazmat][clusterService#updateTask][T#1]] DEBUG org.elasticsearch.cluster.service - [Hazmat] processing [local-gateway-elected-state]: took 137ms done applying updated cluster_state (version: 2, uuid: OXyaVx67TcqYAZMatWP_Lg)
12787 [Thread-1] INFO org.elasticsearch.node - [Hazmat] stopped
12788 [Thread-1] INFO org.elasticsearch.node - [Hazmat] closing ...
12793 [Thread-1] INFO org.elasticsearch.node - [Hazmat] closed
:conductor-test-harness:server FAILED

FAILURE: Build failed with an exception.

  • What went wrong:
    Execution failed for task ':conductor-test-harness:server'.

Process 'command 'C:\Program Files\Java\jdk1.8.0_102\bin\java.exe'' finished with non-zero exit value 1

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED ---the fatal error which made it quit

As the errors show, my other questions are:
1. Can Conductor run on Windows 7?
2. If I follow the Getting Started guide, can I run Conductor without installing Dynomite and without Tomcat running, using only the in-memory DB to test the simple kitchen sink workflow?

Thank you very much for answering! I am really confused by these errors, but I am really interested in Conductor.
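
For what it's worth, the InvalidPathException in error 5 looks like the common Windows pitfall of handing a URL-style path ("/E:/...") to java.nio. A small sketch of the failure and the usual workaround, assuming Java 8 and using the path string from the log above:

import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WindowsPathDemo {
    public static void main(String[] args) {
        // The shape of the path in the stack trace (URL.getPath() style).
        String urlStyle = "/E:/conduct_v1/conductor/test-harness/build/resources/test/es_template.json";
        try {
            Paths.get(urlStyle); // on Windows: InvalidPathException: Illegal char <:> at index 2
        } catch (java.nio.file.InvalidPathException e) {
            System.out.println(e.getMessage());
        }
        // The usual workaround: resolve through a file: URI instead of the raw string.
        Path ok = Paths.get(URI.create("file:///E:/conduct_v1/conductor/test-harness/build/resources/test/es_template.json"));
        System.out.println(ok);
    }
}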

How to run & configure Conductor binaries from Maven Central

After successfully testing the test-harness setup according to the "getting started" guide, I wanted to try the jars from Maven Central.

However, I cannot really figure out

  • which jar is the "main" jar to run in e.g. Tomcat
  • where/how I can configure various settings (e.g. related to Dynomite, the connection to Elasticsearch, etc.)
  • how to e.g. change the port of the UI and configure its connection to the Conductor backend

I appreciate any help.

Stuck on gradle build/local server

Hi,

I am attempting to install/test Conductor; however, when I run the ../gradlew server command, the build does not advance beyond 96% ("Building 96% > :conductor-test-harness:server"). Any guidance would be appreciated.

Installation Steps

  1. Created Ubuntu 14.04 server
  2. Installed DynomiteDB (http://www.dynomitedb.com/docs/dynomite/v0.5.8/quick-start-ubuntu/)
  3. Installed java + jetty using (https://hostpresto.com/community/tutorials/how-to-install-jetty-8-on-ubuntu-14-04/)
  4. Followed quickstart guide: https://netflix.github.io/conductor/intro/

System Info
~# uname -a

Linux hostname 4.4.0-45-generic #66~14.04.1-Ubuntu SMP Wed Oct 19 15:05:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Build Output
~/conductor/test-harness# ../gradlew server
conductor-build-output.txt

Thanks,
~Jesse

Validate schema version

When registering a workflow definition, the schema version should be validated to be 1 or 2.
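
A minimal sketch of the requested check, assuming it runs at registration time (the class, method, and message are illustrative):

public class SchemaVersionValidator {
    static void validateSchemaVersion(int schemaVersion) {
        // Reject anything other than the two supported schema versions.
        if (schemaVersion != 1 && schemaVersion != 2) {
            throw new IllegalArgumentException(
                    "schemaVersion must be 1 or 2, got: " + schemaVersion);
        }
    }
}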

Unifying Accept/Content-Type headers

I would like to construct all messages to, and parse all messages from, Conductor in the same fashion. Can we make all Accept and Content-Type headers application/json? Some responses are of type text/plain, which means I must do extra work (or have a good memory) when creating my microservices.

Conductor server is not working via jar

Running Conductor via the jar is not working:
java -jar build/libs/conductor-server-1.7.0-SNAPSHOT-all.jar


RemoteTransportException[[Longshot][127.0.0.1:9300][indices:data/read/search[phase/query]]]; nested: SearchParseException[failed to parse search source [{"from":0,"size":100,"query":{"bool":{"must":[{"query_string":{"query":"*"}},{"bool":{"must":{"match_all":{}}}}]}},"fields":[],"sort":[{"startTime":{"order":"desc"}}]}]]; nested: SearchParseException[No mapping found for [startTime] in order to sort on]; }
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onFirstPhaseResult(AbstractSearchAsyncAction.java:206)
at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:152)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:874)
at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:852)
at org.elasticsearch.transport.TransportService$4.onFailure(TransportService.java:389)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: SearchParseException[No mapping found for [startTime] in order to sort on]
at org.elasticsearch.search.sort.SortParseElement.addSortField(SortParseElement.java:213)
at org.elasticsearch.search.sort.SortParseElement.addCompoundSortField(SortParseElement.java:187)
at org.elasticsearch.search.sort.SortParseElement.parse(SortParseElement.java:85)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:856)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:667)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:633)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:377)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
... 3 more

It works fine if I run the server via "../gradlew server".

Please suggest.

UI authentication

Currently, the UI doesn't seem to support any user authentication, so anybody could terminate or pause a running workflow.

Are there any configurations available to provide e.g. user authentication?

Thanks a lot.

Error starting conductor on Windows

Starting server in-memory as described on https://netflix.github.io/conductor/intro/
../gradlew server

It returns the following error:
18454 [main] INFO org.eclipse.jetty.server.Server - Started @27718ms
Started server on http://localhost:8080/
Creating kitchensink workflow
18876 [main] ERROR com.netflix.conductor.server.ConductorServer - Illegal char <:> at index 2: /D:/programy/conductor2/conductor/server/build/resources/main/kitchensink.json
java.nio.file.InvalidPathException: Illegal char <:> at index 2: /D:/programy/conductor2/conductor/server/build/resources/main/kitchensink.json
at sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at sun.nio.fs.WindowsPath.parse(WindowsPath.java:94)
at sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:255)
at java.nio.file.Paths.get(Paths.java:84)
at com.netflix.conductor.server.ConductorServer.createKitchenSink(ConductorServer.java:225)
at com.netflix.conductor.server.ConductorServer.start(ConductorServer.java:192)
at com.netflix.conductor.server.Main.main(Main.java:62)
85036 [qtp1319203143-65] INFO org.elasticsearch.plugins - [Sub-Mariner] modules [], plugins [], sites []
107017 [qtp1319203143-58] ERROR com.netflix.conductor.server.resources.GenericExceptionMapper - indices[0] must not be null
java.lang.IllegalArgumentException: indices[0] must not be null
at org.elasticsearch.action.search.SearchRequest.indices(SearchRequest.java:157)
at org.elasticsearch.action.search.SearchRequestBuilder.setIndices(SearchRequestBuilder.java:62)
at org.elasticsearch.client.support.AbstractClient.prepareSearch(AbstractClient.java:587)
at com.netflix.conductor.dao.index.ElasticSearchDAO.search(ElasticSearchDAO.java:196)
at com.netflix.conductor.dao.index.ElasticSearchDAO.searchWorkflows(ElasticSearchDAO.java:148)
at com.netflix.conductor.service.ExecutionService.search(ExecutionService.java:276)
at com.netflix.conductor.server.resources.WorkflowResource.search(WorkflowResource.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286)
at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276)
at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181)
at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
at com.netflix.conductor.server.JerseyModule$1.doFilter(JerseyModule.java:99)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1174)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1106)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:524)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:745)

Task remaining in "scheduled" state

Hi,
I am from Capital One. We are investigating Netflix Conductor as a reactive solution.
We are trying to call a POST API. It looks like the workflow is configured properly,
but during execution the task always shows "SCHEDULED" status. The time goes past the scheduled time, but the status remains "SCHEDULED". We are using the in-memory DB for now, and we have not configured any separate queues. Below are the logs; I am printing the status every 5 seconds.

6057 [qtp59295001-95] INFO org.elasticsearch.plugins - [Mother Earth] modules [], plugins [], sites []
6157 [elasticsearch[Valkin][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.metadata - [Valkin] [conductor] create_mapping [workflow]
6269 [qtp59295001-95] INFO com.netflix.dyno.queues.redis.RedisDynoQueue - com.netflix.dyno.queues.redis.RedisDynoQueue is ready to serve _deciderQueue
6316 [elasticsearch[Valkin][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.metadata - [Valkin] [conductor] create_mapping [task]
6334 [qtp59295001-95] INFO com.netflix.dyno.queues.redis.RedisDynoQueue - com.netflix.dyno.queues.redis.RedisDynoQueue is ready to serve ais_post
6351 [elasticsearch[Valkin][clusterService#updateTask][T#1]] INFO org.elasticsearch.cluster.metadata - [Valkin] [conductor] update_mapping [workflow]
ais post trigger status: 04c30454-da59-4358-a787-8bf3688282a9

{"createTime":1486569155562,"updateTime":1486569155577,"status":"RUNNING","endTime":0,"workflowId":"04c30454-da59-4358-a787-8bf3688282a9","tasks":[{"workflowInstanceId":"04c30454-da59-4358-a787-8bf3688282a9","taskId":"532f4912-b275-417c-9e98-6c91c0a901c6","callbackAfterSeconds":0,"status":"SCHEDULED","taskType":"ais_post","inputData":{"type":"HTTP","http_request":{"uri":"http://localhost:8080/account-user-web/credit-cards/accounts/Qw0OIy3tHqINOLuYsZYL6zAXWU3adhN63KoFRBRAqSbLtJ8%2BGKNY3naUeAVDJGMs/individuals/9900004","method":"POST","body":"{\"occupation\":\"TESTING\"}","Accept":"application/json","Content-Type":"application/json","headers":{"Api-Key":"RTM"}}},"referenceTaskName":"save_ais_booking","retryCount":0,"seq":1,"pollCount":0,"taskDefName":"ais_post","scheduledTime":1486569155570,"startTime":0,"endTime":0,"updateTime":1486569155570,"startDelayInSeconds":0,"retried":false,"callbackFromWorker":true,"responseTimeoutSeconds":3600,"queueWaitTime":0}],"input":{"ais_post_url":"http://localhost:8080/account-user-web/credit-cards/accounts/Qw0OIy3tHqINOLuYsZYL6zAXWU3adhN63KoFRBRAqSbLtJ8%2BGKNY3naUeAVDJGMs/individuals/9900004","ais_post_body":"{\"occupation\":\"TESTING\"}"},"workflowType":"Booking_Orch","version":1,"schemaVersion":2,"startTime":1486569155562}
(the same status JSON is printed, unchanged, on each subsequent 5-second poll)

Task Logger Framework

Provide a framework for tasks to log execution context and additional information.
The feature can be used by task workers to log details of task failures (e.g. stack traces) or execution logs.

The data is stored in a persistent store and is made viewable from the UI. The data is purely informational and will not be passed between tasks.
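
A hypothetical shape for such a hook (all names here are illustrative, not an existing Conductor API):

import java.util.List;

public interface TaskExecutionLogger {
    // Append an informational line to the persisted log of a task execution.
    void log(String taskId, String message);

    // Fetch the stored lines for display in the UI; they are never fed back into task I/O.
    List<String> getLogs(String taskId);
}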

Securing HTTP REST Endpoints

Currently, it seems that no authentication/token/etc. is needed to access the REST endpoints.

So, basically

  • any "evil" person could create/update/start workflow definitions
  • any "evil" worker program could poll for tasks via /tasks/poll/{tasktype} and prevent workflow execution
  • etc.

Are there currently any features available to secure (some kind of authentication and authorization) the endpoints?

Thanks a lot.

Comparison with AirFlow

Hi,

After extensive reading, it seems the only feature Airflow lacks is back pressure: the ability for a microservice to poll for tasks, as opposed to having them submitted via REST.
Other than that, Airflow seems like a superset of Conductor - am I missing something?

Q: What is the 'best' approach for conditional Task loop in a WorkflowDef?

Scenario:
If task Y matches a certain criterion, I want to rerun a series of workflow tasks from a previous step X.
task_1: Some type of task that doesn't need to be re-run
task_2: Input that influences the subsequent decision
task_3: Runs processes on task_2's output that calculate a success/failure status.
decision_task: if task_3.output.success == true, move to task_4; else go back to task_2 for a retry
task_4 to task_n: All steps that should only be run after the decision task evaluates to success == true

Effectively a do ... while loop on task_2 and task_3. All of the examples I see for decisions have very concrete flows that don't involve re-running a previous task and then continuing on a single success path once the criteria are met.

In this case task_2 is complete and not failed, because its process actually worked but was given bad input. task_3 evaluates the after-effects of task_2 for validity and passes its output to the decision task.

Am I better off with a custom task that influences the decider or executor, or does the decider already have the ability to loop back via the WorkflowDef?

Callback After Question

@v1r3n
I am still having some problems to understand the feature "callback after" where you already provided some more details here:

When the worker updates the status of the task as IN_PROGRESS, it can optionally provide a time delay after which the task should be made available. This is useful for long running tasks where the worker should be periodically checking the status of the task completion (by another process for example) at a fixed interval (say 1 hour). Callback after value can be set by worker as part of the Task's status update.

Let's say the "long running worker" polls for the task and updates it to IN_PROGRESS, additionally setting a callback of 1 hour.
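
To make the scenario concrete, this is roughly the update call meant above, expressed against the REST API (a sketch; the IDs are placeholders, and the field names are taken from the task JSON the server returns):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CallbackAfterUpdate {
    public static void main(String[] args) throws Exception {
        String body = "{"
                + "\"workflowInstanceId\":\"<workflow-id>\","
                + "\"taskId\":\"<task-id>\","
                + "\"status\":\"IN_PROGRESS\","
                + "\"callbackAfterSeconds\":3600}"; // hand the task back to a worker after 1 hour
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:8080/api/tasks").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("update returned HTTP " + conn.getResponseCode());
    }
}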

What does it mean that the task would be made available again?
Is the Conductor server rescheduling the task after 1 hour if the worker has not finished? If yes, what is the difference compared to the regular timeout setting in the task definition?

What does it mean that the worker can periodically check the status of task completion?
In my understanding, the worker "works as long as it needs" and the Conductor server waits for a task update with COMPLETED to mark the task complete - why should the worker check the status of task completion?

Thanks for any additional information, example or use case to help me better understand this feature.

Error starting with db=dynomite

Many thanks!

  1. I started Dynomite using Docker:
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
995c1689555a        dynomitedb/dynomite   "/entrypoint.sh dynom"   45 hours ago        Up 45 hours         0.0.0.0:8101-8102->8101-8102/tcp   dynomite

  2. I used a server.properties copied from the documentation, like this:
db=dynomite

# Dynomite Cluster details.
# format is host:port:rack separated by semicolon
workflow.dynomite.cluster.hosts=localhost:8102:rack

# Dynomite cluster name
workflow.dynomite.cluster.name=dyno_cluster_name

# Namespace for the keys stored in Dynomite/Redis
workflow.namespace.prefix=conductor

# Namespace prefix for the dyno queues
workflow.namespace.queue.prefix=conductor_queues

# No. of threads allocated to dyno-queues (optional)
queues.dynomite.threads=10

# Non-quorum port used to connect to local redis.  Used by dyno-queues.
# When using redis directly, set this to the same port as redis server
# For Dynomite, this is 22122 by default or the local redis-server port used by Dynomite.
queues.dynomite.nonQuorum.port=22122

# Transport address to elasticsearch
workflow.elasticsearch.url=localhost:9300

# Name of the elasticsearch cluster
workflow.elasticsearch.index.name=conductor

# Additional modules (optional)
conductor.additional.modules=class_extending_com.google.inject.AbstractModule
  3. I changed build.gradle to add args:
build.dependsOn('shadowJar')
task server(type: JavaExec) {
  systemProperty 'workflow.elasticsearch.url', 'localhost:9300'
  systemProperty 'loadSample', 'true'
  args '/Users/jerry/git/conductor/server/src/main/resources/server.properties', '/Users/jerry/git/conductor/server/src/main/resources/log4j.properties'
  main = 'com.netflix.conductor.server.Main'
  classpath = sourceSets.test.runtimeClasspath  
}
  4. I started the server using ../gradlew server

Then this exception is thrown:

Inferred project: conductor, version: 1.6.0-SNAPSHOT
Publication nebula not found in project :.
[buildinfo] Not using buildInfo properties file for this build.
Publication named 'shadow' does not exist for project ':conductor-ui' in task ':conductor-ui:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-contribs' in task ':conductor-contribs:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-jersey' in task ':conductor-jersey:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':' in task ':artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-redis-persistence' in task ':conductor-redis-persistence:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-core' in task ':conductor-core:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-test-harness' in task ':conductor-test-harness:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-client' in task ':conductor-client:artifactoryPublish'.
Publication named 'shadow' does not exist for project ':conductor-common' in task ':conductor-common:artifactoryPublish'.
:conductor-common:compileJava UP-TO-DATE
:conductor-common:processResources UP-TO-DATE
:conductor-common:classes UP-TO-DATE
:conductor-common:writeManifestProperties UP-TO-DATE
:conductor-common:jar UP-TO-DATE
:conductor-core:compileJava UP-TO-DATE
:conductor-core:processResources UP-TO-DATE
:conductor-core:classes UP-TO-DATE
:conductor-core:writeManifestProperties UP-TO-DATE
:conductor-core:jar UP-TO-DATE
:conductor-contribs:compileJava UP-TO-DATE
:conductor-contribs:processResources UP-TO-DATE
:conductor-contribs:classes UP-TO-DATE
:conductor-contribs:writeManifestProperties UP-TO-DATE
:conductor-contribs:jar UP-TO-DATE
:conductor-jersey:compileJava UP-TO-DATE
:conductor-jersey:processResources UP-TO-DATE
:conductor-jersey:classes UP-TO-DATE
:conductor-jersey:writeManifestProperties UP-TO-DATE
:conductor-jersey:jar UP-TO-DATE
:conductor-redis-persistence:compileJava UP-TO-DATE
:conductor-redis-persistence:processResources UP-TO-DATE
:conductor-redis-persistence:classes UP-TO-DATE
:conductor-redis-persistence:writeManifestProperties UP-TO-DATE
:conductor-redis-persistence:jar UP-TO-DATE
:conductor-server:compileJava UP-TO-DATE
:conductor-server:processResources
:conductor-server:classes
:conductor-server:compileTestJava UP-TO-DATE
:conductor-server:processTestResources UP-TO-DATE
:conductor-server:testClasses UP-TO-DATE
:conductor-server:server
Using /Users/jerry/git/conductor/server/src/main/resources/server.properties
Using log4j config /Users/jerry/git/conductor/server/src/main/resources/log4j.properties
0    [main] WARN  com.netflix.config.sources.URLConfigurationSource  - No URLs will be polled as dynamic configuration sources.
384  [main] INFO  com.netflix.config.sources.URLConfigurationSource  - To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
499  [main] INFO  com.netflix.config.DynamicPropertyFactory  - DynamicPropertyFactory is initialized with configuration sources: com.netflix.config.ConcurrentCompositeConfiguration@4cdbe50f
633  [main] INFO  com.netflix.dyno.contrib.ArchaiusConnectionPoolConfiguration  - Dyno configuration: CompressionStrategy = NONE
737  [main] INFO  com.netflix.dyno.jedis.DynoJedisClient  - Dyno Client runtime properties: ArchaiusConnectionPoolConfiguration{name=, maxConnsPerHost=DynamicProperty: {name=dyno..connection.maxConnsPerHost, current value=3}, maxTimeoutWhenExhausted=DynamicProperty: {name=dyno..connection.maxTimeoutWhenExhausted, current value=800}, maxFailoverCount=DynamicProperty: {name=dyno..connection.maxFailoverCount, current value=3}, connectTimeout=DynamicProperty: {name=dyno..connection.connectTimeout, current value=3000}, socketTimeout=DynamicProperty: {name=dyno..connection.socketTimeout, current value=12000}, localZoneAffinity=DynamicProperty: {name=dyno..connection.localZoneAffinity, current value=true}, resetTimingsFrequency=DynamicProperty: {name=dyno..connection.metrics.resetFrequencySeconds, current value=300}, configPublisherConfig=DynamicProperty: {name=dyno..config.publisher.address, current value=null}, compressionThreshold=DynamicProperty: {name=dyno..config.compressionThreshold, current value=5120}, loadBalanceStrategy=TokenAware, compressionStrategy=NONE, errorRateConfig=null, retryPolicyFactory=com.netflix.dyno.connectionpool.impl.RunOnce$RetryFactory@b684286, failOnStartupIfNoHosts=DynamicProperty: {name=dyno..config.startup.failIfNoHosts, current value=true}, isDualWriteEnabled=DynamicProperty: {name=dyno..dualwrite.enabled, current value=false}, dualWriteClusterName=DynamicProperty: {name=dyno..dualwrite.cluster, current value=null}, dualWritePercentage=DynamicProperty: {name=dyno..dualwrite.percentage, current value=0}}
1631 [main] WARN  com.netflix.dyno.contrib.DynoCPMonitor  - Failed to register metrics with monitor registry
java.lang.IllegalArgumentException: value cannot be empty
        at com.netflix.servo.util.Preconditions.checkArgument(Preconditions.java:48)
        at com.netflix.servo.tag.BasicTag.checkNotEmpty(BasicTag.java:39)
        at com.netflix.servo.tag.BasicTag.<init>(BasicTag.java:33)
        at com.netflix.servo.tag.Tags.newTag(Tags.java:53)
        at com.netflix.servo.tag.SortedTagList$Builder.withTag(SortedTagList.java:88)
        at com.netflix.servo.monitor.Monitors.addMonitorFields(Monitors.java:260)
        at com.netflix.servo.monitor.Monitors.addMonitors(Monitors.java:242)
        at com.netflix.servo.monitor.Monitors.newObjectMonitor(Monitors.java:144)
        at com.netflix.dyno.contrib.DynoCPMonitor.<init>(DynoCPMonitor.java:34)
        at com.netflix.dyno.jedis.DynoJedisClient$Builder.buildDynoJedisClient(DynoJedisClient.java:3356)
        at com.netflix.dyno.jedis.DynoJedisClient$Builder.build(DynoJedisClient.java:3292)
        at com.netflix.conductor.server.ConductorServer.init(ConductorServer.java:136)
        at com.netflix.conductor.server.ConductorServer.<init>(ConductorServer.java:108)
        at com.netflix.conductor.server.Main.main(Main.java:52)
1634 [main] WARN  com.netflix.dyno.jedis.DynoJedisClient  - TOKEN AWARE selected and no token supplier found, using default HttpEndpointBasedTokenMapSupplier()
1687 [main] WARN  com.netflix.dyno.jedis.DynoJedisClient  - DynoJedisClient for app=[] is configured for local rack affinity but cannot determine the local rack! DISABLING rack affinity for this instance. To make the client aware of the local rack either use ConnectionPoolConfigurationImpl.setLocalRack() when constructing the client instance or ensure EC2_AVAILABILTY_ZONE is set as an environment variable, e.g. run with -DEC2_AVAILABILITY_ZONE=us-east-1c
1712 [main] INFO  com.netflix.dyno.jedis.DynoJedisClient  - Starting connection pool for app 
1719 [pool-3-thread-1] INFO  com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl  - Adding host connection pool for host: Host [hostname=localhost, ipAddress=null, port=8102, rack: rack, datacenter: rac, status: Up]
1720 [pool-3-thread-1] INFO  com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl  - Priming connection pool for host:Host [hostname=localhost, ipAddress=null, port=8102, rack: rack, datacenter: rac, status: Up], with conns:3
1900 [pool-3-thread-1] INFO  com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl  - Successfully primed 3 of 3 to Host [hostname=localhost, ipAddress=null, port=8102, rack: rack, datacenter: rac, status: Up]
3265 [main] WARN  com.netflix.dyno.connectionpool.impl.lb.AbstractTokenMapSupplier  - Could not get json response for token topology [java.net.ConnectException: Connection refused]
3298 [main] INFO  com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl  - registered mbean com.netflix.dyno.connectionpool.impl:type=MonitorConsole
3337 [main] INFO  com.netflix.conductor.server.ConductorServer  - Starting conductor server using dynomite cluster dyno_cluster_name




                     _            _             
  ___ ___  _ __   __| |_   _  ___| |_ ___  _ __ 
 / __/ _ \| '_ \ / _` | | | |/ __| __/ _ \| '__|
| (_| (_) | | | | (_| | |_| | (__| || (_) | |   
 \___\___/|_| |_|\__,_|\__,_|\___|\__\___/|_|   




3784 [main] INFO  com.netflix.conductor.dao.dynomite.queue.DynoQueueDAO  - DynoQueueDAO initialized with prefix conductor_queues.test!
6437 [main] INFO  com.netflix.conductor.contribs.http.HttpTask  - HttpTask initialized...
6453 [main] WARN  com.netflix.conductor.server.ConductorConfig  - class_extending_com.google.inject.AbstractModule
java.lang.ClassNotFoundException: class_extending_com.google.inject.AbstractModule
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at com.netflix.conductor.server.ConductorConfig.getAdditionalModules(ConductorConfig.java:122)
        at com.netflix.conductor.server.ServerModule.configure(ServerModule.java:99)
        at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
        at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
        at com.google.inject.spi.Elements.getElements(Elements.java:110)
        at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
        at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
        at com.google.inject.Guice.createInjector(Guice.java:99)
        at com.google.inject.Guice.createInjector(Guice.java:73)
        at com.google.inject.Guice.createInjector(Guice.java:62)
        at com.netflix.conductor.server.ConductorServer.start(ConductorServer.java:167)
        at com.netflix.conductor.server.Main.main(Main.java:62)
7353 [main] INFO  org.eclipse.jetty.util.log  - Logging initialized @10146ms
7617 [main] INFO  org.eclipse.jetty.server.Server  - jetty-9.3.9.v20160517
Jan 18, 2017 7:06:00 PM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
  com.netflix.conductor.server.resources
  io.swagger.jaxrs.json
  io.swagger.jaxrs.listing
Jan 18, 2017 7:06:01 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Root resource classes found:
  class com.netflix.conductor.server.resources.MetadataResource
  class io.swagger.jaxrs.listing.ApiListingResource
  class com.netflix.conductor.server.resources.TaskResource
  class com.netflix.conductor.server.resources.AdminResource
  class com.netflix.conductor.server.resources.WorkflowResource
  class io.swagger.jaxrs.listing.AcceptHeaderApiListingResource
Jan 18, 2017 7:06:01 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Provider classes found:
  class io.swagger.jaxrs.listing.SwaggerSerializers
  class com.netflix.conductor.server.resources.ApplicationExceptionMapper
  class com.netflix.conductor.server.resources.GenericExceptionMapper
  class com.netflix.conductor.server.resources.WebAppExceptionMapper
Jan 18, 2017 7:06:01 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider as a provider class
Jan 18, 2017 7:06:01 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.19.1 03/11/2016 02:42 PM'
Jan 18, 2017 7:06:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.conductor.server.resources.ApplicationExceptionMapper to GuiceInstantiatedComponentProvider
Jan 18, 2017 7:06:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.conductor.server.resources.GenericExceptionMapper to GuiceInstantiatedComponentProvider
Jan 18, 2017 7:06:02 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider to GuiceManagedComponentProvider with the scope "Singleton"
Jan 18, 2017 7:06:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.conductor.server.resources.AdminResource to GuiceInstantiatedComponentProvider
Jan 18, 2017 7:06:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.conductor.server.resources.MetadataResource to GuiceInstantiatedComponentProvider
Jan 18, 2017 7:06:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.conductor.server.resources.TaskResource to GuiceInstantiatedComponentProvider
Jan 18, 2017 7:06:03 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding com.netflix.conductor.server.resources.WorkflowResource to GuiceInstantiatedComponentProvider
10179 [main] INFO  org.eclipse.jetty.server.handler.ContextHandler  - Started o.e.j.s.ServletContextHandler@1e53135d{/,file:///Users/jerry/git/conductor/server/build/resources/main/swagger-ui/,AVAILABLE}
10297 [main] INFO  org.eclipse.jetty.server.AbstractConnector  - Started ServerConnector@24fb2775{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
10299 [main] INFO  org.eclipse.jetty.server.Server  - Started @13092ms
Started server on http://localhost:8080/
Creating kitchensink workflow
11246 [qtp2018260103-16] ERROR com.netflix.conductor.server.resources.GenericExceptionMapper  - PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: UNKNOWN, datacenter: UNKNOW, status: Down], latency=0(0), attempts=0]host pool is offline and no Racks available for fallback
com.netflix.dyno.connectionpool.exception.PoolOfflineException: PoolOfflineException: [host=Host [hostname=UNKNOWN, ipAddress=UNKNOWN, port=0, rack: UNKNOWN, datacenter: UNKNOW, status: Down], latency=0(0), attempts=0]host pool is offline and no Racks available for fallback
        at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnection(HostSelectionWithFallback.java:161)
        at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.getConnectionUsingRetryPolicy(HostSelectionWithFallback.java:120)
        at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.executeWithFailover(ConnectionPoolImpl.java:292)
        at com.netflix.dyno.jedis.DynoJedisClient.d_hset(DynoJedisClient.java:723)
        at com.netflix.dyno.jedis.DynoJedisClient.hset(DynoJedisClient.java:718)
        at com.netflix.conductor.dao.dynomite.DynoProxy.hset(DynoProxy.java:115)
        at com.netflix.conductor.dao.dynomite.RedisMetadataDAO.insertOrUpdateTaskDef(RedisMetadataDAO.java:72)
        at com.netflix.conductor.dao.dynomite.RedisMetadataDAO.createTaskDef(RedisMetadataDAO.java:57)
        at com.netflix.conductor.service.MetadataService.registerTaskDef(MetadataService.java:54)
        at com.netflix.conductor.server.resources.MetadataResource.registerTaskDef(MetadataResource.java:93)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
        at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$VoidOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:167)
        at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
        at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:286)
        at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:276)
        at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:181)
        at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
        at com.netflix.conductor.server.JerseyModule$1.doFilter(JerseyModule.java:99)
        at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:120)
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:135)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1174)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1106)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:524)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:745)
11435 [main] ERROR com.netflix.conductor.server.ConductorServer  - POST http://localhost:8080/api/metadata/taskdefs returned a response status of 500 Internal Server Error
com.sun.jersey.api.client.UniformInterfaceException: POST http://localhost:8080/api/metadata/taskdefs returned a response status of 500 Internal Server Error
        at com.sun.jersey.api.client.WebResource.voidHandle(WebResource.java:709)
        at com.sun.jersey.api.client.WebResource.access$400(WebResource.java:74)
        at com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:555)
        at com.netflix.conductor.server.ConductorServer.createKitchenSink(ConductorServer.java:220)
        at com.netflix.conductor.server.ConductorServer.start(ConductorServer.java:190)
        at com.netflix.conductor.server.Main.main(Main.java:62)
> Could not read standard output of: command '/Library/Java/JavaVirtualMachines/jdk1.8.0_51.jdk/Contents/Home/bin/java'.
java.io.IOException: Stream closed
        at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:291)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        at java.io.FilterInputStream.read(FilterInputStream.java:107)
        at org.gradle.process.internal.streams.ExecOutputHandleRunner.run(ExecOutputHandleRunner.java:51)
        at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
        at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

How to run kitchensink demo

I'm running Conductor from the Docker image, and any workflows I trigger just sit with the first task in SCHEDULED. The log provides nothing useful.

I've run the kitchensink workflow with various versions of:
{
"mod": 1,
"oddEven": "odd"
}
/api/workflow/main_workflow?version=1&correlationId=1

Could someone provide an example of how to trigger it?
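
For reference, a start-workflow call would look roughly like this as a self-contained snippet (a sketch; it assumes the demo registers a workflow named "kitchensink" and that task2Name is one of its expected inputs):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StartKitchensink {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/api/workflow/kitchensink?version=1&correlationId=1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write("{\"task2Name\":\"task_5\"}".getBytes(StandardCharsets.UTF_8));
        }
        // On success, the response body is the new workflow instance id.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}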

info required

Hi,

Would you please suggest some examples or links for the following:

  1. Calling a remote application or service on another machine. What is required on the Conductor side, and what is required for that service on the remote machine?
  2. Registering Conductor with a Eureka server alongside other microservices.
  3. Changing from the Jetty server to Tomcat.

Please suggest.

Regards
Ajay

Support JSONPath based expressions

Allow task inputs to be configured using JSONPath-based expressions.
This will allow much better control over the data flow across different tasks.
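
For example, a task input configured with such an expression could be resolved like this (a sketch using the Jayway JsonPath library, which is an assumption; the issue does not name one):

import com.jayway.jsonpath.JsonPath;

public class JsonPathDemo {
    public static void main(String[] args) {
        // A stand-in for a previous task's output document.
        String taskOutput = "{\"task_1\":{\"output\":{\"names\":[\"a\",\"b\"]}}}";
        // The kind of expression a task input could be configured with:
        String first = JsonPath.read(taskOutput, "$.task_1.output.names[0]");
        System.out.println(first); // prints: a
    }
}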

Conductor and Eureka

Is there support for registering the Conductor backend service with Eureka?
Is there also support for running multiple Conductor backend service instances to build a highly available cluster?

It would be nice if workers could discover Conductor instances via Eureka and do load balancing.

At least in the client code, I could find a hint concerning the usage of a Eureka client here:

client/src/main/java/com/netflix/conductor/client/task/WorkflowTaskCoordinator.java

Thanks a lot.

"TypeError: Cannot read property &#39;forEach&#39;

Hi,

I am getting the error below when calling a workflow that contains an HTTP task. I have defined the input parameters as follows:

"inputParameters": {
"http_request": {
"uri": "http://localhost:8880/apis/testNames",
"method": "GET",
"accept": "application/json;charset=UTF-8"
}
}

500 - Internal Server Error
"TypeError: Cannot read property 'forEach' of undefined
   at _callee2$ (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/dist/webpack:/src/api/wfe.js:49:5)
   at tryCatch (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/regenerator-runtime/runtime.js:64:40)
   at GeneratorFunctionPrototype.invoke [as _invoke] (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/regenerator-runtime/runtime.js:355:22)
   at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/regenerator-runtime/runtime.js:116:21)
   at tryCatch (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/regenerator-runtime/runtime.js:64:40)
   at GeneratorFunctionPrototype.invoke [as _invoke] (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/regenerator-runtime/runtime.js:297:24)
   at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/regenerator-runtime/runtime.js:116:21)
   at step (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/dist/server.js:8603:192)
   at /Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/dist/server.js:8603:362
   at run (/Users/ajay/OfficeWork/Office_Code/ECMS_Conductor/conductor/ui/node_modules/core-js/modules/es6.promise.js:87:22)\n"

Please suggest

Separate docker images for UI/backend

It would be useful to create separate Docker images for the UI and for the backend. This would allow for custom UI implementations without modifying the server image. These can be tagged something like conductor:ui and conductor:server. A third tag for the combined image could also be added (and could be the default for conductor:latest).

ArrayIndexOutOfBoundsException while running with dynomite

Hi,

I am trying to start Conductor with Dynomite.
Dynomite and Redis are running on the local machine, and I have applied the settings below.

For the Conductor service:

db=dynomite
#Dynomite Cluster details.
#format is host:port:rack separated by semicolon
workflow.dynomite.cluster.hosts=localhost:8102:localrack

#Dynomite cluster name

workflow.dynomite.cluster.name=test_coductor

#namespace for the keys stored in Dynomite/Redis
workflow.namespace.prefix=testnamespace

#namespace prefix for the dyno queues
workflow.namespace.queue.prefix=ecms_queues

#no. of threads allocated to dyno-queues
queues.dynomite.threads=10

#non-quorum port used to connect to local redis. Used by dyno-queues
queues.dynomite.nonQuorum.port=6369

#Transport address to elasticsearch
workflow.elasticsearch.url=localhost:9300

#Name of the elasticsearch cluster
workflow.elasticsearch.index.name=test_index

When starting Conductor from the jar, I get the error below.

It is reading the properties file, which contains the properties above.

0 [main] INFO com.netflix.dyno.jedis.DynoJedisClient - Starting connection pool for app conductor
4 [pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Adding host connection pool for host: Host [hostname=localhost, ipAddress=null, port=8102, rack: localrack, datacenter: localrac, status: Up]
4 [pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.HostConnectionPoolImpl - Priming connection pool for host:Host [hostname=localhost, ipAddress=null, port=8102, rack: localrack, datacenter: localrac, status: Up], with conns:3
41 [pool-3-thread-1] INFO com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl - Successfully primed 3 of 3 to Host [hostname=localhost, ipAddress=null, port=8102, rack: localrack, datacenter: localrac, status: Up]
Exception in thread "main" java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
at com.netflix.dyno.jedis.DynoJedisClient$Builder.startConnectionPool(DynoJedisClient.java:3409)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.createConnectionPool(DynoJedisClient.java:3380)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.buildDynoJedisClient(DynoJedisClient.java:3358)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.build(DynoJedisClient.java:3292)
at com.netflix.conductor.server.ConductorServer.init(ConductorServer.java:159)
at com.netflix.conductor.server.ConductorServer.(ConductorServer.java:114)
at com.netflix.conductor.server.Main.main(Main.java:78)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.calculateReplicationFactor(HostSelectionWithFallback.java:389)
at com.netflix.dyno.connectionpool.impl.lb.HostSelectionWithFallback.initWithHosts(HostSelectionWithFallback.java:346)
at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.initSelectionStrategy(ConnectionPoolImpl.java:627)
at com.netflix.dyno.connectionpool.impl.ConnectionPoolImpl.start(ConnectionPoolImpl.java:526)
at com.netflix.dyno.jedis.DynoJedisClient$Builder.startConnectionPool(DynoJedisClient.java:3392)
... 6 more

Please suggest.
Regards
Ajay

not reading eureka client properties file

Hi,

I have made changes to Conductor to make it work as a Eureka client.

I am using conductor-eureka-client.properties, which holds the Eureka server info and other properties.

When I run the server via ../gradlew server, the file is created under /build/resources/main/conductor-eureka-client.properties, and com.netflix.config.util.ConfigurationUtils picks it up and everything works.

I want to run via the jar file instead, but then com.netflix.config.util.ConfigurationUtils is not able to read it, even though I have placed the file alongside the other properties files (like log4j.properties) in the jar.

Please suggest.

Regards
Ajay

Examples in the documentation for HttpTask's headers parameter do not match the code and will throw an exception if used

The documentation for HttpTask shows the following for headers:

    "headers": [
      {
        "Accept": "application/json"
      },
      {
        "Content-Type": "application/json"
      }
    ]

The declaration of headers in the HttpTask.Input class in 'HttpTask.java' is:
private Map<String, Object> headers = new HashMap<>();

If you use the documented example, the Jackson conversion to HttpTask.Input on line 102 throws an IllegalArgumentException. The documentation should be (notice the curly brackets):

    "headers": {
        "Accept": "application/json",
        "Content-Type": "application/json"
    }

Run UI via jar or war

Hi,

I am running the Conductor server via the jar, but I am still running the UI via gulp.

How can I run the UI via a jar or war as well? Please suggest.
Regards
Ajay

Metadata Management - Deleting an existing Workflow

In the swagger UI, I can find a way to delete task-definitions.

However, I cannot find a way to delete an existing workflow.

Is there currently no way to delete an existing workflow definition?

I tried to construct something like this, but without success:

curl -X DELETE --header 'Accept: application/json' 'http://my_host:my_port/api/metadata/workflow/main_workflow'
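As far as I can tell there is no DELETE endpoint for workflow definitions in the metadata API at this point; later releases appear to add one that also takes the version in the path. If your build has it, the call would look roughly like this (host, port, name, and version are placeholders):

    curl -X DELETE --header 'Accept: application/json' \
      'http://my_host:my_port/api/metadata/workflow/main_workflow/1'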

Passing Sensitive Information from a Workflow to a Worker

I have a generic worker program, "GetFileFromAnyFTPServer", to which I simply pass some parameters (FTP host, username, password, directory, filename), and it downloads the file from that server.

I created a workflow with some inputParameters to be able to pass the username/password, like this:

    {
      "name": "GetFileFromAnyFTPServer",
      "taskReferenceName": "GetFileFromAnyFTPServer",
      "inputParameters": {
        "ftpHost": "workflow.input.ftpHost",
        "ftpDirectory": "workflow.input.ftpDirectory",
        "ftpFilename": "workflow.input.ftpFilename",
        "ftpUsername": "workflow.input.ftpUsername",
        "ftpPassword": "workflow.input.ftpPassword"
      },
      "type": "SIMPLE",
      "startDelay": 0,
      "callbackFromWorker": true
    },

The problem is that the UI, as well as the REST endpoints (e.g. /workflow/{workflowId}), now show the username and password in cleartext.

Is there any way to pass a username/password from a workflow or task to a worker more "securely"?

Are there any best-practices you can share with me?

Thanks a lot.
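One common workaround, sketched below, is to pass only an opaque credential reference through the workflow and let the worker resolve it against a secret store that it controls. The ftpCredentialsId parameter and the store itself are hypothetical, not Conductor features:

    "inputParameters": {
      "ftpHost": "workflow.input.ftpHost",
      "ftpDirectory": "workflow.input.ftpDirectory",
      "ftpFilename": "workflow.input.ftpFilename",
      "ftpCredentialsId": "workflow.input.ftpCredentialsId"
    }

The UI and REST endpoints then only ever see the reference, and the worker exchanges it for the real username/password at execution time.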

Task update endpoint returning mismatch of content-type header and actual body

Commit d6d483f changed the task update method to return the task id as a string.

The tasks endpoint of the HTTP interface now returns this value, but still with a content-type header of application/json. As a result, the Swagger UI and the Python client report errors when they try to parse the response as JSON, when it is actually just a plain string.

Running test-workflow from test-harness

I was able to compile Conductor with Gradle and start "main_workflow" from the Swagger UI.

I was also able to change the state of task_1 to "in progress" and "complete", but I have problems starting dyn-task_2 ... I am not sure which API calls I need in order to start task_2 and feed the parameter "taskToExecute".

Is there a way to execute the tasks of "main_workflow" task by task and see the API call interactions, to learn more about its internal workings? There are some test classes within test-harness, like End2EndTests.java, but I couldn't figure out whether these classes do the job, or even how to start them. (See the sketch below.)

I appreciate any help.
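To drive a workflow task by task without the Java client, the plain task HTTP API can be used. A minimal sketch, assuming the default localhost:8080 endpoint and a task type named task_2; the ids in the second call come from the poll response:

    # claim the next pending task of the given type
    curl 'http://localhost:8080/api/tasks/poll/task_2?workerid=manual'

    # report a result for the claimed task
    curl -X POST -H 'Content-Type: application/json' \
      'http://localhost:8080/api/tasks' \
      -d '{
            "workflowInstanceId": "WORKFLOW_ID_FROM_POLL",
            "taskId": "TASK_ID_FROM_POLL",
            "status": "COMPLETED",
            "outputData": {"taskToExecute": "task_5"}
          }'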

Conductor as eureka client

Hi,

I have made some code changes to start the Conductor service as a Eureka client, so that all the other Spring Boot microservices can be used as worker tasks.

I want to know whether I should contribute these code changes, or whether this feature will arrive in a coming release. I don't want to do a local merge every time I download the latest code. Please suggest.

Regards
Ajay

An example for WAIT task

Hi, I am currently working on some prototypes using Conductor to orchestrate a workflow. The prototype workflow has an HTTP task to download an image, and the next task's input parameter needs to be the downloaded image. Since the download takes time, I am planning to use WAIT. Could you provide an example WAIT task? The kitchensink example does not reference WAIT. (See the sketch below.)
Another option I am trying is Fork and Join: fork on one task and join on that task. The fork example has a sub-workflow within it. Is a sub-workflow necessary for a Fork task?

Thank you,
Sri
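For reference, a WAIT task needs nothing beyond the type; a minimal sketch (the names are placeholders):

    {
      "name": "wait_for_download",
      "taskReferenceName": "wait_for_download_ref",
      "type": "WAIT"
    }

The workflow then parks on this task until something external completes it, e.g. a POST to /api/tasks with status COMPLETED, or an event handler. As for the second question: a plain FORK_JOIN does not require a sub-workflow; its forked branches can be ordinary lists of tasks.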

How to persist workflow and task definitions when restarting the Docker containers

Hi,
I run it with docker-compose up, which starts 4 containers. When it is shut down with Ctrl-C or docker-compose down, docker_elasticsearch and docker_dynomite go down, and after restarting with docker-compose up the stores are empty.
When I set an Elasticsearch volume to save the node data to disk, like this:

volumes:
      - ./esdata:/usr/share/elasticsearch/data

it does not work.

How should this be done, or how should I think about it?

Thanks!
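Persisting Elasticsearch alone is not enough here, because the workflow and task definitions live in Dynomite/Redis. The same volume idea would need to be applied to the dynomite container as well; a sketch, assuming its Redis data lives under /var/lib/redis (the actual path depends on the image used, so verify it before relying on this):

    dynomite:
      volumes:
        - ./dynomitedata:/var/lib/redis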

Purpose of Elasticsearch as indexing backend

Just for better understanding...

What exactly is the purpose of the Elasticsearch backend?
Is the content written to Elasticsearch in any way read again and reused by Conductor Server and its state-machine?

Or are all state-machine activities handled solely via Dynomite? That would mean Elasticsearch is a kind of "logging archive" for admins.

Thanks for clarification.

Python client?

Where do alternate client implementations fit in the roadmap for this project? I see a python_client branch, but there has been no conversation or activity on it for nearly a month, so its status is not at all clear.

How does someone know that a workflow is completed, and get the output?

I have two questions

  1. How do I save workflow definitions, task definitions, and history? Right now, when the server restarts, they are wiped.
  2. How do workers know that a workflow is completed? Should they poll every workflow's status, add an end task and poll for that, or something else? Which usage do you recommend? (See the sketch below.)

thanks!
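For question 2, one option that avoids an extra end task is to poll the workflow itself. A minimal sketch against the default endpoint; the returned JSON includes the status (e.g. RUNNING, COMPLETED) and the workflow output:

    curl 'http://localhost:8080/api/workflow/WORKFLOW_ID_HERE?includeTasks=false'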

Update server-all jar name

The docs, here: https://netflix.github.io/conductor/server/ show that the conductor-server-all jar should be named as such: java -jar conductor-server-all-VERSION.jar

The gradle build creates the following jar:
.//server/build/libs/conductor-server-1.6.0-SNAPSHOT-all.jar

I would like it to be renamed as shown in the docs ("all" before the version).
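Until the build is changed upstream, the jar name can be overridden locally. A sketch in Gradle/Groovy, assuming the server module uses the Shadow plugin's shadowJar task (archiveName was the standard property on archive tasks in Gradle versions of that era):

    // server/build.gradle
    shadowJar {
        archiveName = "conductor-server-all-${project.version}.jar"
    }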

Starting a dynamic number of tasks based on output of a previous task

Some other question, where I would like to ask you for some advice.

I would like to achieve the following in a workflow:

Task A is the first microservice task in the workflow and ultimately generates a list of files, e.g.

  • /opt/folder1/file01.jpg
  • /opt/folder2/file02.jpg
  • /opt/folder1/file03.jpg
  • ..
  • /opt/folderX/fileN.jpg

I have another microservice, "jpg-extractor", to which you pass a filename; it processes that jpg file and extracts some information from it.
I want to run dozens of "jpg-extractor" instances to parallelize processing.

What I now want to achieve is that the workflow takes the output of Task A and, for each filename, puts a Task B in the queue with that filename as the task input (Task B represents the "jpg-extractor").

My "jpg-extractor" microservices instances could now poll Conductor periodically to get a new "Task B" with a filename as input to process.

The problem is that the number of Task Bs to put in the queue depends on the output of Task A (i.e. the number of filenames).

Is this somehow possible with workflows, outputParameters, and dynamic tasks, and if so, can you share some code examples with me?

Many thanks for your advice.
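This is the use case for a dynamic fork. Below is a hedged sketch of the shape such a definition takes, modeled on the FORK_JOIN_DYNAMIC task type; check the parameter names (dynamicForkTasksParam, dynamicForkTasksInputParamName) against the version in use. Task A's worker is assumed to emit two outputs: dynamicTasks, a list of task definitions (one SIMPLE jpg-extractor task per filename, each with a unique taskReferenceName), and dynamicTasksInput, a map from each taskReferenceName to that task's input (e.g. {"filename": "/opt/folder1/file01.jpg"}):

    {
      "name": "fan_out_jpg_extract",
      "taskReferenceName": "jpg_fanout",
      "inputParameters": {
        "dynamicTasks": "task_a.output.dynamicTasks",
        "input": "task_a.output.dynamicTasksInput"
      },
      "type": "FORK_JOIN_DYNAMIC",
      "dynamicForkTasksParam": "dynamicTasks",
      "dynamicForkTasksInputParamName": "input"
    },
    {
      "name": "jpg_join",
      "taskReferenceName": "jpg_join",
      "type": "JOIN"
    }

Each forked Task B is queued like any SIMPLE task, so the jpg-extractor instances pick them up by polling as usual, and the JOIN completes once all forked tasks finish.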

Separated UI not connecting to API

I have a docker-compose that has the UI in a different container than the API:
https://github.com/jcantosz/conductor/blob/docker-split/docker/docker-compose.yaml. I can reach my UI at localhost:3000 and the API at localhost:8080

My UI instance pulls up but does not display any tasks or workflows. I believe I have missed a configuration for the UI.

If I docker exec -it <conductor:ui ID> bash I can ping docker-server, and curl the API endpoints.

My compose pulls up with the following message

conductor-ui_1      | {"name":"Conductor UI","hostname":"6a8fbf1a96a7","pid":42,"level":30,"msg":"Serving static /ui/dist","time":"2017-01-23T16:03:52.343Z","src":{"file":"/ui/dist/webpack:/src/server.js","line":13,"func":"canUseDOM"},"v":0}
conductor-ui_1      | {"name":"Conductor UI","hostname":"6a8fbf1a96a7","pid":42,"level":30,"msg":"Workflow UI listening at http://:::5000","time":"2017-01-23T16:03:52.418Z","src":{"file":"/ui/dist/webpack:/src/server.js","line":22},"v":0}

I haven't gotten a chance to do too much tracing, but the following files may be worth investigating:
ui/gulpfile.babel.js (gulp.task('browserSync'... )
ui/core/HttpClient.js (getURL function -- process.env.WEBSITE_HOSTNAME)
ui/api/sys.js (process.env.WF_SERVER is being used)

A note about the compose file: it seems that if I do not specify ipv4_address or the ipam information, the UI does not come up for me. I am investigating why, and need to confirm this was not due to other changes.
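Given that ui/api/sys.js reads process.env.WF_SERVER, the likely fix is to point the UI container at the API container explicitly. A sketch for the compose file, assuming the API service is reachable as conductor-server on the shared network (service name and URL are placeholders to adapt):

    conductor-ui:
      environment:
        - WF_SERVER=http://conductor-server:8080/api/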
