thehive-project / cortex
Cortex: a Powerful Observable Analysis and Active Response Engine
Home Page: https://thehive-project.org
License: GNU Affero General Public License v3.0
Bug
Question | Answer |
---|---|
Cortex version | 1.1.3 |
Using the cortexutils library, analyzers include their input in the failure output report, to help users investigate the errors that can occur.
It seems that there is a regression in Cortex that removes the input attributes from the failure reports and keeps only the success and errorMessage properties.
As an example, the failure report should look like:
{
"input": {
"tlp": 1,
"dataType": "file",
"filename": "output.swf",
...
"config": {
...
}
},
"errorMessage": "An error occurred during the file scan",
"success": false
}
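For reference, here is a minimal sketch (in plain Python, not the actual cortexutils code) of the failure report an analyzer is expected to emit, with the input echoed back:

```python
import json

def failure_report(job_input, message):
    """Build the failure report shape shown above: errorMessage,
    success=false, and the original job input echoed back so users
    can investigate what the analyzer received."""
    return {
        "errorMessage": message,
        "input": job_input,   # the field the regression drops
        "success": False,
    }

report = failure_report(
    {"tlp": 1, "dataType": "file", "filename": "output.swf", "config": {}},
    "An error occurred during the file scan",
)
print(json.dumps(report, indent=2))
```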
Bug
Question | Answer |
---|---|
Cortex version / git hash | 1.1.2 |
Package Type | Any |
Cortex doesn't correctly handle the case where an analyzer fails and returns a JSON report including an errorMessage field.
We end up with Cortex returning an incoherent error message.
Feature Request
NA
The current version of Cortex (1.0.0) has no persistence. When it is restarted, all jobs are lost, hence the associated results are no longer available.
Moreover, if an analyst runs an analyzer through the Web UI or the REST API, they can't retrieve the report from TheHive or any other 3rd party tool that leverages the API. They need to re-run the analysis. This is a serious problem:
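A minimal sketch of what job persistence could look like — writing each job's report to disk so it survives a restart. The field names and the `jobs` directory are illustrative, not Cortex's actual storage:

```python
import json
import os

def save_job(job, directory="jobs"):
    """Persist a finished job's report as a JSON file so results
    survive a restart (illustrative; a real backend would use a
    database)."""
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, f"{job['id']}.json")
    with open(path, "w") as f:
        json.dump(job, f)
    return path

def load_job(job_id, directory="jobs"):
    """Reload a persisted job report by id after a restart."""
    with open(os.path.join(directory, f"{job_id}.json")) as f:
        return json.load(f)
```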
Bug
Question | Answer |
---|---|
OS version (server) | Debian 8 |
Cortex version / git hash | 1.1.1 |
Package Type | Debian Package |
After the upgrade, Cortex does not come up.
It looks as if something is missing (maybe in the config?)
Error Messages:
2017-05-19 09:54:56,987 [INFO] from akka.event.slf4j.Slf4jLogger in application-akka.actor.default-dispatcher-2 - Slf4jLogger started
2017-05-19 09:54:58,701 [ERROR] from akka.actor.OneForOneStrategy in application-akka.actor.default-dispatcher-3 - Unable to provision, see the following errors:
1) Error injecting constructor, java.util.NoSuchElementException: None.get
at services.MispSrv.<init>(MispSrv.scala:32)
at services.MispSrv.class(MispSrv.scala:21)
while locating services.MispSrv
for parameter 1 at services.AnalyzerSrv.<init>(AnalyzerSrv.scala:12)
at services.AnalyzerSrv.class(AnalyzerSrv.scala:12)
while locating services.AnalyzerSrv
for parameter 1 at services.JobActor.<init>(JobSrv.scala:109)
while locating services.JobActor
1 error
akka.actor.ActorInitializationException: akka://application/user/JobActor: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:174)
at akka.actor.ActorCell.create(ActorCell.scala:607)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
at akka.dispatch.Mailbox.run(Mailbox.scala:223)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.util.NoSuchElementException: None.get
at services.MispSrv.<init>(MispSrv.scala:32)
at services.MispSrv.class(MispSrv.scala:21)
while locating services.MispSrv
for parameter 1 at services.AnalyzerSrv.<init>(AnalyzerSrv.scala:12)
at services.AnalyzerSrv.class(AnalyzerSrv.scala:12)
while locating services.AnalyzerSrv
for parameter 1 at services.JobActor.<init>(JobSrv.scala:109)
while locating services.JobActor
1 error
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1025)
at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051)
at play.api.inject.guice.GuiceInjector.instanceOf(GuiceInjectorBuilder.scala:405)
at play.api.inject.guice.GuiceInjector.instanceOf(GuiceInjectorBuilder.scala:400)
at play.api.libs.concurrent.ActorRefProvider$$anonfun$1.apply(Akka.scala:210)
at play.api.libs.concurrent.ActorRefProvider$$anonfun$1.apply(Akka.scala:210)
at akka.actor.TypedCreatorFunctionConsumer.produce(IndirectActorProducer.scala:87)
at akka.actor.Props.newActor(Props.scala:213)
at akka.actor.ActorCell.newActor(ActorCell.scala:562)
at akka.actor.ActorCell.create(ActorCell.scala:588)
... 9 common frames omitted
Caused by: java.util.NoSuchElementException: None.get
at scala.None$.get(Option.scala:347)
at scala.None$.get(Option.scala:345)
at services.MispSrv.<init>(MispSrv.scala:34)
at services.MispSrv$$FastClassByGuice$$52b14c8e.newInstance(<generated>)
at com.google.inject.internal.cglib.reflect.$FastConstructor.newInstance(FastConstructor.java:40)
at com.google.inject.internal.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:61)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:105)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:267)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:104)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:267)
at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1103)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:145)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:104)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:85)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:267)
at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
... 18 common frames omitted
Configuration:
# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
http.port=9001
play.crypto.secret="XXXXXXXXXXXXXXXXXXX"
analyzer {
path = "/opt/Cortex-Analyzers/analyzers"
config {
global {
proxy {
http="http://x:8080",
https="http://y:8080"
}
}
DNSDB {
server="https://api.dnsdb.info"
key="..."
}
DomainTools {
username="..."
key="..."
}
[...]
PassiveTotal {
key="..."
username=".."
}
MISP {
url="https://misp.local.dom"
key="..."
certpath=["/etc/ssl/private/misp.local.crt", ""]
name="instance-1"
}
}
}
Feature Request
OS version (server) : Centos 7
My server is not public. It is on an internal network behind a proxy server. How do I configure Cortex analyzer jobs to run behind a proxy?
Example: running the OTXquery job, the connection stays in syn_sent (because I'm using a proxy) and an error is returned.
The HTTP client of Play can use the proxy settings of the JVM but can't handle authentication. Please add support for proxy authentication.
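As a workaround on the analyzer side, Python's requests library accepts inline credentials in the proxy URL. A small helper to build one (hostnames and credentials below are placeholders):

```python
from urllib.parse import quote

def proxy_url(host, port, user=None, password=None):
    """Build a proxy URL, URL-encoding optional inline credentials."""
    auth = ""
    if user is not None:
        auth = f"{quote(user, safe='')}:{quote(password or '', safe='')}@"
    return f"http://{auth}{host}:{port}"

# Placeholder proxy and credentials; pass the dict to requests per call:
#   requests.get(url, proxies=proxies, timeout=30)
proxies = {
    "http": proxy_url("proxy.local", 8080, "alice", "p@ss"),
    "https": proxy_url("proxy.local", 8080, "alice", "p@ss"),
}
```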
Bug
Question | Answer |
---|---|
OS version (server) | Debian, Ubuntu |
Cortex version / git hash | 1.1.2 |
Package Type | deb |
After installing the Cortex package (deb), the service does not start and shows a Java stack trace.
After installing Cortex, run these commands:
sudo apt-get remove openjdk-9-jre-headless
sudo apt-get install openjdk-8-jre-headless
sudo apt-get install cortex
The problem is that on Ubuntu 16.04 the cortex package installs openjdk-9 by default, but Cortex works with openjdk 8.
Thanks to Nabil Adouani for fixing the problem :)
Bug
Question | Answer |
---|---|
Cortex version | 1.1.1 |
Package Type | any |
When a job is submitted, Cortex returns the created job, serialized in JSON. The serializer uses the wrong format (the attributes used are not correct). This makes TheHive raise a JsResultException.
The error is introduced by commit 9f8f1eb in file app/models/JsonFormat.scala.
Bug
Binary
Hello, when trying to test Cortex via binaries with TheHive, I noticed that there was no install directory in the following packages:
thehive-cortex-latest.zip and cortex-latest.zip
Thanks
Feature Request
Question | Answer |
---|---|
OS version (server) | Debian |
OS version (client) | Ubuntu |
Cortex version / git hash | thehive-2.10.2 |
Package Type | Binary |
Browser type & version | Chrome |
It would be great if "link" observables could be analyzed with the Cuckoo platform
https://cuckoosandbox.org/
Feature Request by @crackytsi
Question | Answer |
---|---|
OS version (server) | Debian |
OS version (client) | 8.9 |
TheHive version / git hash | 2.13.1 |
Package Type | DEB |
Currently, misp-modules can only be enabled or disabled as a whole.
It would be helpful to be able to disable selected modules,
especially when some modules make no sense or do not work as they should.
Bug
Question | Answer |
---|---|
OS version (server) | CentOS |
OS version (client) | Win 7 |
Cortex version / git hash | 1.1.4, hash of the commit |
Package Type | Binary |
Browser type & version | Chrome 61 |
When clicking out of the "New Analysis" box, the following error appears in the bottom-right corner
Bug
Question | Answer |
---|---|
OS version (server) | Ubuntu 16.04 |
OS version (client) | Windows 7 |
Cortex version / git hash | 1.1.3-1 |
Package Type | Binary (.deb) |
Browser type & version | Opera 46.0 |
I just finished installing the TheHive-Cortex bundle on my server. I installed the analyzers too, following the official documentation.
Some analyzers work well, but for a few of them I get the message "TLP is higher than allowed" coming from the analyzer's Python file, and I don't know why.
For example, I get this message when I use the VMRay analyzer on a file (observable), or the WOT_Lookup analyzer on a URL.
I didn't find any information on this message; sorry if it's obvious...
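The message comes from the TLP gate applied before an analyzer runs: if check_tlp is enabled and the observable's TLP exceeds the analyzer's max_tlp, the analyzer refuses to run. A simplified sketch of that check (not the actual cortexutils source):

```python
def check_tlp(job_input):
    """Refuse to run when the observable's TLP exceeds the analyzer's
    configured max_tlp (only enforced when check_tlp is enabled)."""
    cfg = job_input.get("config", {})
    if cfg.get("check_tlp", False) and job_input.get("tlp", 2) > cfg.get("max_tlp", 2):
        raise ValueError("TLP is higher than allowed.")

# A TLP:RED (3) observable against an analyzer capped at TLP:AMBER (2)
try:
    check_tlp({"tlp": 3, "config": {"check_tlp": True, "max_tlp": 2}})
except ValueError as e:
    print(e)
```

Raising max_tlp in that analyzer's config section (or submitting the observable with a lower TLP) avoids the error.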
Bug
Any
If a user accesses the job details page of a deleted job, Cortex should redirect to the job list instead of displaying an empty job details page
Feature Request
NA
As stated in #2, anyone can access Cortex with no authentication. Anonymous users/services can run analyzers and consume quotas/queries and that is not desirable.
It must be possible to change or lock down the API key if it is compromised/leaked.
Change the sample configuration for MISP in application.sample.
The new configuration should look like this:
#url=["https://mymispserver_1", "https://mymispserver_2"]
#key=["mykey_1", "mykey_2" ]
#certpath=["", ""]
#name=["misp_server_name_1", "misp_server_name_2"]
Feature Request
NA
Currently, Cortex doesn't require any sort of auth. It would be nice if authentication from an upstream service could be required, so that people can't just hit the Cortex URL and anonymously run analyzers.
At a minimum, using a secret key in TheHive config that is passed in all requests to Cortex would be a start; that would require some info on enabling HTTPS for Cortex (I was able to mimic the HTTPS setup for TheHive, so that won't be an issue).
A more verbose way would be to provide the auth information from TheHive and pass it to Cortex, which then accesses the Elasticsearch backend (or another auth backend) to gather keys or other info, record metrics, etc.
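The "secret key passed in all requests" idea could be as simple as a constant-time comparison of a shared-secret header on every request. A sketch (the header shape and secret value are placeholders):

```python
import hmac

SHARED_SECRET = "change-me"  # placeholder; the same value would live in both configs

def is_authorized(headers):
    """Check a shared-secret bearer header with a constant-time compare
    to avoid timing side channels."""
    supplied = headers.get("Authorization", "")
    return hmac.compare_digest(supplied, f"Bearer {SHARED_SECRET}")
```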
Bug
Question | Answer |
---|---|
OS version (server) | Ubuntu |
OS version (client) | OS X |
Cortex version / git hash | latest |
Package Type | Binary (Debian Package) |
Browser type & version | N/A |
The Yeti analyzer doesn't have a default config in application.conf.
The Yeti analyzer doesn't allow configuration of an API key. By default, Yeti is configured with no auth. After deleting the default Yeti user, access to the UI / API requires an API key. The analyzer should be configured to allow API-level access via token auth.
Have a config syntax in /etc/cortex/application.conf that provides:
Enhancement
This issue is related to adding a page loader to Cortex UI, like what is made on TheHive.
Configure SBT to create:
It should be possible to publish the produced files.
The aim of this task is to optimize imports, follow the code style guide, add return types for public methods, etc.
Feature Request
Currently, as far as I can tell, getting a new configuration for analyzers requires stopping and restarting Cortex. This can cause failures in currently running analyses, which shouldn't have to happen. Instead, a mechanism to either check the conf every N seconds (configurable) for changes and reload on change, or to manually request a config reload, would help ensure currently running analyses are not interrupted.
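The "check conf every N seconds" idea boils down to watching the file's mtime. A minimal sketch of such a watcher (scheduling it is left to the caller):

```python
import os

class ConfigWatcher:
    """Track a config file's mtime; check() returns True once per change,
    so a scheduler can poll it every N seconds and trigger a reload
    without interrupting running analyses."""

    def __init__(self, path):
        self.path = path
        self.last = os.path.getmtime(path)

    def check(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self.last:
            self.last = mtime
            return True
        return False
```

A timer loop would call check() every N seconds and, when it returns True, reload the analyzer configuration while leaving running jobs untouched.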
Bug
Question | Answer |
---|---|
OS version (server) | RedHat 6.9 |
OS version (client) | N/A |
Cortex version / git hash | 1.1.4 |
Package Type | RPM |
Browser type & version | N/A |
The File_Info analyzer errors out while processing a file.
1. Opened the File_Info analyzer in Cortex.
2. Selected TLP:White and Data Type:File, then submitted a file containing a sample trojan.
3. Got the following report:
{
"errorMessage": "Unexpected Error: 'module' object has no attribute 'hash_file'",
"input": {
"tlp": 0,
"dataType": "file",
"content-type": "application/pdf",
"filename": "secured document.pdf",
"file": "/tmp/cortex-4878520860249705835-datafile",
"config": {
"max_tlp": 3,
"check_tlp": false,
"service": ""
}
},
"success": false
}
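The error means the analyzer calls a hash_file helper that its imported module doesn't provide. What that helper is expected to compute is just chunked multi-algorithm hashing of the submitted file, roughly (a sketch, not the actual File_Info code):

```python
import hashlib

def hash_file(path, algorithms=("md5", "sha1", "sha256")):
    """Hash the submitted file in 64 KiB chunks with several
    algorithms at once, so large samples don't load into memory."""
    hashers = {name: hashlib.new(name) for name in algorithms}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}
```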
Documentation Request
You have a link in your wiki - https://github.com/CERT-BDF/Cortex/wiki/How-to-create-an-analyzer
This link currently doesn't go anywhere; these instructions would be particularly useful to those who would like to start contributing analyzers to Cortex right now.
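In the meantime, a rough sketch of the report shape a Cortex analyzer returns, inferred from the sample reports elsewhere on this page (the full/summary contents here are illustrative):

```python
import json

def run(job_input):
    """Take the job input Cortex passes to the analyzer and return the
    report envelope seen in the sample reports above: success, summary,
    full, artifacts."""
    data = job_input.get("data", "")
    full = {"observable": data, "length": len(data)}  # illustrative payload
    return {"success": True, "summary": {}, "full": full, "artifacts": []}

print(json.dumps(run({"data": "example.com", "dataType": "domain", "tlp": 1})))
```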
Bug
Question | Answer |
---|---|
Cortex version | 1.1.0 |
Package Type | Any |
The 1.1.0 release should have included the new Cortex logo, but it was not packaged during the release process.
Bug
Question | Answer |
---|---|
OS version (server) | Debian 8 |
Cortex version / git hash | 1.1.1-2 |
Package Type | Debian Package |
If you manually submit a job in Cortex via the web frontend, the job status is not refreshed. The new status only appears if you click the job list again.
By the way, at the very bottom of the page it says: VERSION: Snapshot
I would expect version 1.1.1...
Add an API to communicate with MISP and run analyzers and misp-modules.
Enhancement
The idea is to display in the list of analyzers, the author and license of each analyzer when available
FR
If data the analyzer needs, such as a username, password, or an API key, is defined through a config file (the JSON file or so...), the analyzer should only be displayed if all config values marked as required are set.
Maybe this is something to consider together with
From my point of view, this is Cortex-sided, so I added the issue here.
Bug
Question | Answer |
---|---|
OS version (server) | Debian stretch/sid |
OS version (client) | MacOS |
Cortex version / git hash | cortex 1.1.4-1 |
Package Type | Binary, thehive 2.13.2-1 |
I installed the TheHive and can't get the Cortex API to give me anything other than
A client error occurred on GET /api/analyzer : Resource not found by Assets controller
Accessing Cortex and TheHive through the WUI works just fine but not through API calls.
Curl request:
$ curl -v http://cortex-1-1.company.com:9000/api/analyzer
* Trying 10.4.24.12...
* TCP_NODELAY set
* Connected to cortex-1-1.company.com (10.4.24.12) port 9000 (#0)
> GET /api/analyzer HTTP/1.1
> Host: cortex-1-1.company.com:9000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Set-Cookie: XSRF-TOKEN=e72aad4a879067855b4debd3cd2d2b2ff4c2cfa3-1509762781220-fcb9c302f8723d6ff4ac6b00; Path=/
< Date: Sat, 04 Nov 2017 02:33:01 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 86
<
* Connection #0 to host cortex-1-1.company.com left intact
A client error occurred on GET /api/analyzer : Resource not found by Assets controller
Library request:
>>> from cortex4py.api import CortexApi
>>> api = CortexApi('http://cortex-1-1.company.com:9000', cert=False)
>>> api.get_analyzers()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/cortex4py/api.py", line 75, in get_analyzers
self.__handle_error(e)
File "/usr/local/lib/python3.6/site-packages/cortex4py/api.py", line 51, in __handle_error
raise_from(CortexException("Unexpected exception"), exception)
File "/usr/local/lib/python3.6/site-packages/future/utils/__init__.py", line 398, in raise_from
exec(execstr, myglobals, mylocals)
File "<string>", line 1, in <module>
cortex4py.api.CortexException: Unexpected exception
Unexpected exception
I don't know. I can't find anything in the logs to help point to an issue.
I tailed these log files and then made the API requests (curl, cortex4py).
thehive@cortex-1-1:~$ tail -f /var/log/cortex/application.log /var/log/elasticsearch/hive*.log /var/log/thehive/application.log
==> /var/log/cortex/application.log <==
2017-11-01 00:10:02,290 [INFO] from services.ExternalAnalyzerSrv in application-akka.actor.default-dispatcher-200 - Register analyzer DomainTools_WhoisLookup_IP 2.0 (DomainTools_WhoisLookup_IP_2_0)
2017-11-01 00:10:02,291 [INFO] from services.ExternalAnalyzerSrv in application-akka.actor.default-dispatcher-200 - Register analyzer DomainTools_ReverseNameServer 2.0 (DomainTools_ReverseNameServer_2_0)
2017-11-01 00:10:02,296 [INFO] from services.ExternalAnalyzerSrv in application-akka.actor.default-dispatcher-200 - Register analyzer DomainTools_ReverseIP 2.0 (DomainTools_ReverseIP_2_0)
2017-11-01 00:10:02,300 [INFO] from services.ExternalAnalyzerSrv in application-akka.actor.default-dispatcher-200 - Register analyzer DomainTools_ReverseWhois 2.0 (DomainTools_ReverseWhois_2_0)
2017-11-01 00:10:02,303 [INFO] from services.ExternalAnalyzerSrv in application-akka.actor.default-dispatcher-200 - Register analyzer MISP 2.0 (MISP_2_0)
2017-11-01 00:10:02,305 [INFO] from services.ExternalAnalyzerSrv in application-akka.actor.default-dispatcher-200 - Register analyzer CERTatPassiveDNS 2.0 (CERTatPassiveDNS_2_0)
2017-11-01 00:21:59,290 [INFO] from services.ExternalAnalyzerSrv in application-analyzer-216 - Execute sh -c "./WOT_lookup.py" in WOT
2017-11-01 00:21:59,298 [INFO] from services.ExternalAnalyzerSrv in application-analyzer-215 - Execute sh -c "./passivetotal_analyzer.py" in PassiveTotal
2017-11-01 00:21:59,302 [INFO] from services.ExternalAnalyzerSrv in application-analyzer-217 - Execute sh -c "./hippo.py" in Hippocampe
2017-11-01 00:21:59,314 [INFO] from services.ExternalAnalyzerSrv in application-analyzer-218 - Execute sh -c "./safebrowsing_analyzer.py" in GoogleSafebrowsing
==> /var/log/elasticsearch/hive-2017-09-27.log <==
[2017-09-27T06:06:21,193][INFO ][o.e.n.Node ] [Mu2gqAX] starting ...
[2017-09-27T06:06:23,040][INFO ][o.e.t.TransportService ] [Mu2gqAX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-09-27T06:06:26,537][INFO ][o.e.c.s.ClusterService ] [Mu2gqAX] new_master {Mu2gqAX}{Mu2gqAXqQz23sDcdhTBChw}{0AMSnT2YRIy82MvbxWVWiA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-09-27T06:06:26,655][INFO ][o.e.h.n.Netty4HttpServerTransport] [Mu2gqAX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-09-27T06:06:26,655][INFO ][o.e.n.Node ] [Mu2gqAX] started
[2017-09-27T06:06:26,694][INFO ][o.e.g.GatewayService ] [Mu2gqAX] recovered [0] indices into cluster_state
[2017-09-27T06:06:47,812][INFO ][o.e.n.Node ] [Mu2gqAX] stopping ...
[2017-09-27T06:06:47,950][INFO ][o.e.n.Node ] [Mu2gqAX] stopped
[2017-09-27T06:06:47,950][INFO ][o.e.n.Node ] [Mu2gqAX] closing ...
[2017-09-27T06:06:48,000][INFO ][o.e.n.Node ] [Mu2gqAX] closed
==> /var/log/elasticsearch/hive-2017-10-24.log <==
[2017-10-24T14:53:17,177][INFO ][o.e.n.Node ] [Mu2gqAX] starting ...
[2017-10-24T14:53:17,403][INFO ][o.e.t.TransportService ] [Mu2gqAX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-10-24T14:53:20,494][INFO ][o.e.c.s.ClusterService ] [Mu2gqAX] new_master {Mu2gqAX}{Mu2gqAXqQz23sDcdhTBChw}{OdYfKRLoTsubjaTXvWQUEw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-10-24T14:53:20,516][INFO ][o.e.h.n.Netty4HttpServerTransport] [Mu2gqAX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-10-24T14:53:20,516][INFO ][o.e.n.Node ] [Mu2gqAX] started
[2017-10-24T14:53:20,527][INFO ][o.e.g.GatewayService ] [Mu2gqAX] recovered [0] indices into cluster_state
[2017-10-24T14:54:28,661][INFO ][o.e.n.Node ] [Mu2gqAX] stopping ...
[2017-10-24T14:54:28,734][INFO ][o.e.n.Node ] [Mu2gqAX] stopped
[2017-10-24T14:54:28,734][INFO ][o.e.n.Node ] [Mu2gqAX] closing ...
[2017-10-24T14:54:28,789][INFO ][o.e.n.Node ] [Mu2gqAX] closed
==> /var/log/elasticsearch/hive-2017-10-26.log <==
[2017-10-26T15:19:23,592][INFO ][o.e.n.Node ] [Mu2gqAX] starting ...
[2017-10-26T15:19:24,108][INFO ][o.e.t.TransportService ] [Mu2gqAX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-10-26T15:19:27,232][INFO ][o.e.c.s.ClusterService ] [Mu2gqAX] new_master {Mu2gqAX}{Mu2gqAXqQz23sDcdhTBChw}{AmjP6OeeSnqyHpWGT7-emQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-10-26T15:19:27,315][INFO ][o.e.g.GatewayService ] [Mu2gqAX] recovered [0] indices into cluster_state
[2017-10-26T15:19:27,329][INFO ][o.e.h.n.Netty4HttpServerTransport] [Mu2gqAX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-10-26T15:19:27,329][INFO ][o.e.n.Node ] [Mu2gqAX] started
[2017-10-26T15:21:13,944][INFO ][o.e.n.Node ] [Mu2gqAX] stopping ...
[2017-10-26T15:21:14,063][INFO ][o.e.n.Node ] [Mu2gqAX] stopped
[2017-10-26T15:21:14,063][INFO ][o.e.n.Node ] [Mu2gqAX] closing ...
[2017-10-26T15:21:14,104][INFO ][o.e.n.Node ] [Mu2gqAX] closed
==> /var/log/elasticsearch/hive_deprecation.log <==
[2017-10-26T15:16:00,891][WARN ][o.e.d.e.NodeEnvironment ] ES has detected the [path.data] folder using the cluster name as a folder [/var/lib/elasticsearch], Elasticsearch 6.0 will not allow the cluster name as a folder within the data path
[2017-10-26T15:16:03,511][WARN ][o.e.d.c.s.Settings ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[2017-10-26T15:19:10,728][WARN ][o.e.d.e.NodeEnvironment ] ES has detected the [path.data] folder using the cluster name as a folder [/var/lib/elasticsearch], Elasticsearch 6.0 will not allow the cluster name as a folder within the data path
[2017-10-26T15:19:16,387][WARN ][o.e.d.c.s.Settings ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[2017-10-30T21:51:07,057][WARN ][o.e.d.e.NodeEnvironment ] ES has detected the [path.data] folder using the cluster name as a folder [/var/lib/elasticsearch], Elasticsearch 6.0 will not allow the cluster name as a folder within the data path
[2017-10-30T21:51:13,085][WARN ][o.e.d.c.s.Settings ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[2017-10-30T21:58:29,717][WARN ][o.e.d.e.NodeEnvironment ] ES has detected the [path.data] folder using the cluster name as a folder [/var/lib/elasticsearch], Elasticsearch 6.0 will not allow the cluster name as a folder within the data path
[2017-10-30T21:58:38,293][WARN ][o.e.d.c.s.Settings ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
[2017-10-30T22:27:24,122][WARN ][o.e.d.e.NodeEnvironment ] ES has detected the [path.data] folder using the cluster name as a folder [/var/lib/elasticsearch], Elasticsearch 6.0 will not allow the cluster name as a folder within the data path
[2017-10-30T22:27:30,237][WARN ][o.e.d.c.s.Settings ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
==> /var/log/elasticsearch/hive_index_indexing_slowlog.log <==
==> /var/log/elasticsearch/hive_index_search_slowlog.log <==
==> /var/log/elasticsearch/hive.log <==
[2017-10-30T22:27:29,207][INFO ][o.e.p.PluginsService ] [Mu2gqAX] no plugins loaded
[2017-10-30T22:27:37,468][INFO ][o.e.d.DiscoveryModule ] [Mu2gqAX] using discovery type [zen]
[2017-10-30T22:27:39,227][INFO ][o.e.n.Node ] initialized
[2017-10-30T22:27:39,228][INFO ][o.e.n.Node ] [Mu2gqAX] starting ...
[2017-10-30T22:27:39,877][INFO ][o.e.t.TransportService ] [Mu2gqAX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-10-30T22:27:43,069][INFO ][o.e.c.s.ClusterService ] [Mu2gqAX] new_master {Mu2gqAX}{Mu2gqAXqQz23sDcdhTBChw}{nEUlWNRUS1Suvo8R4uZ3Ig}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-10-30T22:27:43,153][INFO ][o.e.h.n.Netty4HttpServerTransport] [Mu2gqAX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-10-30T22:27:43,154][INFO ][o.e.n.Node ] [Mu2gqAX] started
[2017-10-30T22:27:43,864][INFO ][o.e.g.GatewayService ] [Mu2gqAX] recovered [1] indices into cluster_state
[2017-10-30T22:27:44,973][INFO ][o.e.c.r.a.AllocationService] [Mu2gqAX] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[the_hive_11][2]] ...]).
==> /var/log/thehive/application.log <==
2017-11-04 03:30:47,587 [INFO] from org.elasticsearch.plugins.PluginsService in main - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
2017-11-04 03:30:47,588 [INFO] from org.elasticsearch.plugins.PluginsService in main - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
2017-11-04 03:30:47,588 [INFO] from org.elasticsearch.plugins.PluginsService in main - loaded plugin [org.elasticsearch.transport.Netty3Plugin]
2017-11-04 03:30:47,588 [INFO] from org.elasticsearch.plugins.PluginsService in main - loaded plugin [org.elasticsearch.transport.Netty4Plugin]
2017-11-04 03:30:49,813 [INFO] from io.netty.util.internal.PlatformDependent in main - Your platform does not provide complete low-level API for accessing direct buffers reliably. Unless explicitly requested, heap buffer will always be preferred to avoid potential system instability.
2017-11-04 03:30:51,561 [INFO] from connectors.cortex.services.CortexClient in main - new Cortex(LOCAL CORTEX, http://localhost:9999, ) Basic Auth enabled: false
2017-11-04 03:30:51,591 [INFO] from connectors.cortex.services.CortexSrv in main - Search for unfinished job ...
2017-11-04 03:30:51,970 [INFO] from connectors.cortex.services.CortexSrv in application-akka.actor.default-dispatcher-3 - 0 jobs found
2017-11-04 03:30:53,246 [INFO] from play.api.Play in main - Application started (Prod)
2017-11-04 03:30:53,934 [INFO] from play.core.server.AkkaHttpServer in main - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
Bug
Any
Cortex UI uses an open source angular library to display notification toasts: https://github.com/alexcrack/angular-ui-notification
This library introduces an XSS vulnerability, since it trusts the messages to be displayed as HTML.
An issue is still open to fix this vulnerability.
In the meantime, we will make sure to sanitize the content we display in notification toasts.
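The sanitization amounts to HTML-escaping any message before it reaches the notification library; illustrated here in Python (the UI itself is Angular, so the real fix lives in the JavaScript code):

```python
import html

def safe_toast(message):
    """Escape HTML metacharacters so a message can never be interpreted
    as markup by the notification library."""
    return html.escape(message)

print(safe_toast('<img src=x onerror="alert(1)">'))
```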
Feature Request
NA
The current version of Cortex (1.0.0) does not support rate-limiting nor quota alerts for analyzers that rely on quota-based services.
This is a problem as analysts may exhaust the number of queries allowed per time period (day/month...) or inadvertently abuse free services by sending too many queries.
The quota alerts should be shown in the Web UI next to each analyzer description and provided as a response to an API query (so that it can be displayed in TheHive).
TheHive should also have a page where analysts can view the query consumption per analyzer and how many are left without having to connect to the Cortex Web UI.
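A fixed-window counter is enough to track "queries left" per analyzer for both the UI badge and the API response. A sketch (class and field names are illustrative):

```python
import time

class Quota:
    """Fixed-window quota: allow at most `limit` queries per `period`
    seconds and expose how many are left for display in the UI/API."""

    def __init__(self, limit, period=86400, clock=time.time):
        self.limit, self.period, self.clock = limit, period, clock
        self.window_start, self.used = clock(), 0

    def remaining(self):
        # Reset the counter when the window has elapsed
        if self.clock() - self.window_start >= self.period:
            self.window_start, self.used = self.clock(), 0
        return self.limit - self.used

    def consume(self):
        if self.remaining() <= 0:
            return False   # quota exhausted: refuse the query
        self.used += 1
        return True
```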
Bug
Question | Answer |
---|---|
OS version (client) | Ubuntu |
I dump files from HTTP requests in a PCAP file using the Python tool CapTipper. After dumping the files into a folder, I scan the hash of each file using the Python module virustotal. I get this error:
{
"errorMessage": "Error: Invalid output\nCapTipper v0.3 b13 - Malicious HTTP traffic explorer tool\nCopyright 2015 Omri Herscovici <[email protected]>\n\n[A] Analyzing PCAP: /tmp/cortex-5967164689721189551-datafile\n\n[+] Traffic Activity Time: Wed, 07/07/10 03:16:19\n[+] Conversations Found:\n\n0: \u001b[35m / \u001b[0;0m -> text/html \u001b[0;0m(0.html)\u001b[0;0m [41.4 KB] (Magic: \u001b[1mGZ\u001b[22m)\n1: \u001b[35m /sd/idlecore-tidied.css?T_2_5_0_300 \u001b[0;0m -> text/css \u001b[0;0m(idlecore-tidied.css)\u001b[0;0m [0.0 B] \n2: \u001b[35m /sd/print.css?T_2_5_0_300 \u001b[0;0m -> text/css \u001b[0;0m(print.css)\u001b[0;0m [0.0 B] \n3: \u001b[35m /sd/twitter_icon.png \u001b[0;0m -> image/png \u001b[32m(twitter_icon.png)\u001b[0;0m [0.0 B] \n4: \u001b[35m /sd/facebook_icon.png \u001b[0;0m -> image/png \u001b[32m(facebook_icon.png)\u001b[0;0m [3.4 KB] (Magic: \u001b[1mPNG\u001b[22m)\n5: \u001b[35m /sd/cs_sic_controls_new.png?T_2_5_0_299 \u001b[0;0m -> image/png \u001b[32m(cs_sic_controls_new.png)\u001b[0;0m [0.0 B] \n6: \u001b[35m /sd/cs_i2_gradients.png?T_2_5_0_299 \u001b[0;0m -> image/png \u001b[32m(cs_i2_gradients.png)\u001b[0;0m [0.0 B] \n7: \u001b[35m /sd/logo2.png \u001b[0;0m -> image/png \u001b[32m(logo2.png)\u001b[0;0m [0.0 B] \n\n GZIP Decompression of object 0 (0.html) successful!\n New object created: 5\n\n Object 0 written to /home/asfandyar/Documents/Cap_Tipper/0-0.html\n\u001b[31m\n[E] Object: 1 (idlecore-tidied.css) : Response body was empty\u001b[0;0m\n\n Object 1 written to /home/asfandyar/Documents/Cap_Tipper/1-idlecore-tidied.css\n\u001b[31m\n[E] Object: 2 (print.css) : Response body was empty\u001b[0;0m\n\n Object 2 written to /home/asfandyar/Documents/Cap_Tipper/2-print.css\n\u001b[31m\n[E] Object: 3 (twitter_icon.png) : Response body was empty\u001b[0;0m\n\n Object 3 written to /home/asfandyar/Documents/Cap_Tipper/3-twitter_icon.png\n Object 4 written to /home/asfandyar/Documents/Cap_Tipper/4-facebook_icon.png\n Object 5 written to 
/home/asfandyar/Documents/Cap_Tipper/5-ungzip-0.html\n{\"artifacts\": [{\"type\": \"url\", \"value\": \"https://www.virustotal.com/file/c087d878b15538b92d20ea650317c36acc26cdaf5bb87408da144f856fd266be/analysis/1510913190/\"}, {\"type\": \"url\", \"value\": \"https://www.virustotal.com/file/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/analysis/1511443342/\"}, {\"type\": \"url\", \"value\": \"https://www.virustotal.com/file/054b71965ff26b3cfc5305a2e5d7fa29ebf123518f3992e2a7a29c251410aadc/analysis/1282392885/\"}, {\"type\": \"url\", \"value\": \"https://www.virustotal.com/file/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/analysis/1511443342/\"}, {\"type\": \"url\", \"value\": \"https://www.virustotal.com/file/b4a88d47cd933f7c0de51ac243ef8ff9b89554c2a88d2395e170f36ea3042e07/analysis/1510916881/\"}, {\"type\": \"url\", \"value\": \"https://www.virustotal.com/file/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/analysis/1511443342/\"}], \"full\": {\"RESULT\": [{\"Antivirus' total : \": 60, \"Permalink : \": \"https://www.virustotal.com/file/c087d878b15538b92d20ea650317c36acc26cdaf5bb87408da144f856fd266be/analysis/1510913190/\", \"Antivirus's positives : \": 0, \"Resource's status : \": \"Scan finished, information embedded\", \"FILE NAME : \": \"/home/asfandyar/Documents/Cap_Tipper/0-0.html\"}, {\"Antivirus' total : \": 61, \"Permalink : \": \"https://www.virustotal.com/file/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/analysis/1511443342/\", \"Antivirus's positives : \": 0, \"Resource's status : \": \"Scan finished, information embedded\", \"FILE NAME : \": \"/home/asfandyar/Documents/Cap_Tipper/3-twitter_icon.png\"}, {\"Antivirus' total : \": 42, \"Permalink : \": \"https://www.virustotal.com/file/054b71965ff26b3cfc5305a2e5d7fa29ebf123518f3992e2a7a29c251410aadc/analysis/1282392885/\", \"Antivirus's positives : \": 0, \"Resource's status : \": \"Scan finished, information embedded\", \"FILE 
NAME : \": \"/home/asfandyar/Documents/Cap_Tipper/4-facebook_icon.png\"}, {\"Antivirus' total : \": 61, \"Permalink : \": \"https://www.virustotal.com/file/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/analysis/1511443342/\", \"Antivirus's positives : \": 0, \"Resource's status : \": \"Scan finished, information embedded\", \"FILE NAME : \": \"/home/asfandyar/Documents/Cap_Tipper/1-idlecore-tidied.css\"}, {\"Antivirus' total : \": 59, \"Permalink : \": \"https://www.virustotal.com/file/b4a88d47cd933f7c0de51ac243ef8ff9b89554c2a88d2395e170f36ea3042e07/analysis/1510916881/\", \"Antivirus's positives : \": 0, \"Resource's status : \": \"Scan finished, information embedded\", \"FILE NAME : \": \"/home/asfandyar/Documents/Cap_Tipper/5-ungzip-0.html\"}, {\"Antivirus' total : \": 61, \"Permalink : \": \"https://www.virustotal.com/file/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/analysis/1511443342/\", \"Antivirus's positives : \": 0, \"Resource's status : \": \"Scan finished, information embedded\", \"FILE NAME : \": \"/home/asfandyar/Documents/Cap_Tipper/2-print.css\"}]}, \"success\": true, \"summary\": {}}\n",
"success": false
}
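The root cause of the "Invalid output" failure above is that Cortex parses the analyzer's stdout as a single JSON document, while CapTipper prints its banner and progress messages there. A minimal sketch of a parseable error report (the function name and sample fields are illustrative; the `success`, `errorMessage` and `input` keys match the reports shown above):

```python
import json
import sys

def report_error(message, job_input=None):
    # Cortex parses the analyzer's stdout as one JSON document, so any banner
    # or progress text printed to stdout breaks it with "Invalid output".
    # Diagnostics belong on stderr, never stdout.
    report = {"success": False, "errorMessage": message}
    if job_input is not None:
        report["input"] = job_input  # echoing the input eases debugging
    json.dump(report, sys.stdout)

report_error("An error occurred during the file scan",
             {"dataType": "file", "tlp": 1})
```

Any third-party tool the analyzer shells out to should have its console output redirected to stderr or a file for the same reason.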
Bug
Question | Answer |
---|---|
OS version (server) | Ubuntu 16.0.4 LTS |
OS version (client) | OS X |
Cortex version / git hash | Latest |
Package Type | Source |
Browser type & version | Chrome (N/A) |
Enabling the Shodan analyzer fails with an error of:
Traceback (most recent call last):
File "./shodan_analyzer.py", line 3, in <module>
from cortexutils.analyzer import Analyzer
ImportError: No module named 'cortexutils'
To confirm it is installed:
root@ip-10-10-2-40:/opt/Cortex-Analyzers/analyzers/Shodan# pip search cortexutils
cortexutils (1.2.0) - A Python library for including utility classes for Cortex analyzers
INSTALLED: 1.2.0 (latest)
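A likely cause is that `pip` installed `cortexutils` for a different interpreter than the one `#!/usr/bin/env python` resolves to (the quoted module name in the traceback suggests the shebang found Python 3 while `pip` targets Python 2, or vice versa). A small diagnostic sketch, to be run with the same interpreter the analyzer's shebang resolves to:

```python
import importlib.util
import sys

# If find_spec returns None here, the ImportError above is expected for this
# interpreter, even though `pip search` reports the package as installed —
# pip may belong to a different Python installation.
print("interpreter:", sys.executable)
spec = importlib.util.find_spec("cortexutils")
print("cortexutils importable:", spec is not None)
```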
Enhancement
Use the `status` Cortex API endpoint to get and display the instance version in the footer.
MISP modules initialization can take time. To speed up Cortex startup, this initialization is lazy (done only when it is needed).
This makes the first access to the Cortex analyzer list very slow.
Initialization should begin when Cortex starts, but in a separate thread, so as not to slow down Cortex startup.
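The idea can be sketched as follows (a hedged illustration, not Cortex's actual implementation, which lives in Scala; class and method names are made up): kick off the expensive enumeration in a background thread at startup, and only block a caller if the result is not ready yet.

```python
import threading

class LazyModuleList:
    """Start the slow MISP-module enumeration at startup in a background
    thread; callers block only until the first load completes."""

    def __init__(self, loader):
        self._result = None
        self._done = threading.Event()
        # daemon thread: does not prevent process shutdown
        threading.Thread(target=self._load, args=(loader,), daemon=True).start()

    def _load(self, loader):
        self._result = loader()   # slow enumeration runs off the main thread
        self._done.set()

    def get(self, timeout=None):
        # First caller may wait briefly; subsequent calls return instantly.
        self._done.wait(timeout)
        return self._result
```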
Bug
Navigating from one page to another should automatically scroll to the top of the new page.
Feature Request
N/A
Cortex, as of 1.x, doesn't have a (simple) process for:
Moreover, Cortex does not currently allow teams to:
Create an analyzer store, much like a mobile app store, that allows teams to:
Bug
When starting an analysis that fails from Cortex's UI, no error message is displayed.
Feature Request
In the application config, there is a "path to analyzers" field. It would be nice to be able to provide N paths to analyzers, so that we could continue to use the built-in analyzers but also provide other locations that are checked. Each path to analyzers would have its own associated config (we would have to determine whether Global is truly global, or global PER analyzer path).
This would allow folks to create alternate repos of analyzers that could be shared in private circles (trust groups, internal teams at an org, etc.) without having to go through a merging process every time new analyzers are added to the main repo. Basically, paths would be checked in order, and when the config is parsed, analyzer names would have to be globally unique (failure causes exit).
It would make upgrading, adding, and managing analyzers much easier at scale.
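The proposed lookup could work along these lines (a sketch of the feature request, not existing Cortex behaviour; names are illustrative):

```python
import os

def collect_analyzers(paths):
    """Scan each configured analyzer path in order, and fail hard on
    duplicate analyzer names, since names must be globally unique
    across all paths."""
    seen = {}
    for path in paths:
        for name in sorted(os.listdir(path)):
            if name in seen:
                raise SystemExit(
                    f"duplicate analyzer {name!r} in {path} "
                    f"(first seen in {seen[name]})")
            seen[name] = path
    return seen
```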
Bug
Question | Answer |
---|---|
OS version (server) | CentOS 7 (Kernel 3.10.0-514) |
Cortex version / git hash | 1.1.1 |
Package Type | Binary |
{
  "errorMessage": "Error: Invalid output\nTraceback (most recent call last):\n  File \"./abusefinder.py\", line 8, in <module>\n    from cortexutils.analyzer import Analyzer\nImportError: No module named cortexutils.analyzer\n",
  "success": false
}
Change `#!/usr/bin/env python` to use the Python version of the virtual environment we have to use with TheHive and Cortex.
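A minimal sketch of automating that shebang change (the virtualenv path is hypothetical; adjust it to your installation):

```python
import re

def retarget_shebang(script_text, interpreter="/opt/cortex/venv/bin/python"):
    """Rewrite a script's shebang line to point at a dedicated virtualenv
    interpreter, leaving the rest of the file untouched."""
    # \A anchors at the start of the text; count=1 replaces only line 1.
    return re.sub(r"\A#!.*", "#!" + interpreter, script_text, count=1)
```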
Bug
Any
The jobs list size is set to 10 items by default, and the API's query param is not taken into account.
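The intended behaviour can be sketched as follows (an illustration assuming a start/limit style parameter; the actual Cortex API parameter names may differ):

```python
def paginate(jobs, start=0, limit=10):
    """Return one page of the jobs list, honouring the caller-supplied
    range instead of always returning the default page of 10."""
    return jobs[start:start + limit]
```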
Bug
Question | Answer |
---|---|
OS version (server) | Debian 8 |
Cortex version / git hash | 1.1.1-2 |
Package Type | Debian Package |
Hello,
Thanks a lot for your really, really good work!!!
Sorry, maybe it's my fault, but I don't have any further ideas, so I am using this way to address it:
May 19 21:02:46 debian-8-user cortex[19470]: import misp_modules
May 19 21:02:46 debian-8-user cortex[19470]: ImportError: No module named 'misp_modules'
May 19 21:02:46 debian-8-user cortex[19470]: [info] application - GET /api/analyzer returned 500
May 19 21:02:46 debian-8-user cortex[19470]: java.lang.RuntimeException: Nonzero exit value: 1
May 19 21:02:46 debian-8-user cortex[19470]: at scala.sys.package$.error(package.scala:27)
May 19 21:02:46 debian-8-user cortex[19470]: at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.slurp(ProcessBuilderImpl.scala:132)
May 19 21:02:46 debian-8-user cortex[19470]: at scala.sys.process.ProcessBuilderImpl$AbstractBuilder.$bang$bang(ProcessBuilderImpl.scala:102)
May 19 21:02:46 debian-8-user cortex[19470]: at services.MispSrv.list$lzycompute(MispSrv.scala:46)
May 19 21:02:46 debian-8-user cortex[19470]: at services.MispSrv.list(MispSrv.scala:45)
May 19 21:02:46 debian-8-user cortex[19470]: at services.AnalyzerSrv.list(AnalyzerSrv.scala:18)
May 19 21:02:46 debian-8-user cortex[19470]: at controllers.AnalyzerCtrl$$anonfun$list$1.apply(AnalyzerCtrl.scala:19)
May 19 21:02:46 debian-8-user cortex[19470]: at controllers.AnalyzerCtrl$$anonfun$list$1.apply(AnalyzerCtrl.scala:18)
May 19 21:02:46 debian-8-user cortex[19470]: at play.api.mvc.ActionBuilder$$anonfun$apply$13.apply(Action.scala:371)
May 19 21:02:46 debian-8-user cortex[19470]: at play.api.mvc.ActionBuilder$$anonfun$apply$13.apply(Action.scala:370)
May 19 21:02:47 debian-8-user cortex[19470]: Traceback (most recent call last):
May 19 21:02:47 debian-8-user cortex[19470]: File "/opt/cortex/contrib/misp-modules-loader.py", line 10, in <module>
analyzer {
  path = "/opt/Cortex-Analyzers/analyzers"
  config {
    ...
  }
}
AND
misp.modules {
  enabled = true
  config {
    ...
  }
}
Bug
Question | Answer |
---|---|
OS version (server) | Debian 8 |
Cortex version / git hash | 1.1.3-1 |
Package Type | Binary |
Cortex analysis with geoip_country leads to an endless loop.
Excerpt:
Jun 30 13:59:37 server cortex[29365]: at [Source: 2017-06-30 13:57:12,616 - geoip_country - DEBUG - 62.210.15.114
Jun 30 13:59:37 server cortex[29365]: {"error": "GeoIP resolving error"}
Jun 30 13:59:37 server cortex[29365]: ; line: 1, column: 6]
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1586)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:521)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:450)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.base.ParserMinimalBase._reportMissingRootWS(ParserMinimalBase.java:466)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._verifyRootSpace(ReaderBasedJsonParser.java:1598)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._parsePosNumber(ReaderBasedJsonParser.java:1248)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:705)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3847)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3765)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2050)
Jun 30 13:59:37 server cortex[29365]: [info] application - GET /api/job/aOXIJkMHdn51RHsP/waitreport?atMost=1%20minute returned 500
Jun 30 13:59:37 server cortex[29365]: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('-' (code 45)): Expected space separating root-level values
Jun 30 13:59:37 server cortex[29365]: at [Source: 2017-06-30 13:57:12,616 - geoip_country - DEBUG - 62.210.15.114
Jun 30 13:59:37 server cortex[29365]: {"error": "GeoIP resolving error"}
Jun 30 13:59:37 server cortex[29365]: ; line: 1, column: 6]
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1586)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:521)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:450)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.base.ParserMinimalBase._reportMissingRootWS(ParserMinimalBase.java:466)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._verifyRootSpace(ReaderBasedJsonParser.java:1598)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._parsePosNumber(ReaderBasedJsonParser.java:1248)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:705)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3847)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3765)
Jun 30 13:59:37 server cortex[29365]: at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2050)
Jun 30 13:59:37 server cortex[29850]: [info] a.e.s.Slf4jLogger - Slf4jLogger started
Jun 30 13:59:37 server cortex[29850]: [info] s.MispSrv - MISP modules is enabled, loader is /opt/cortex/contrib/misp-modules-loader.py
Jun 30 13:59:37 server cortex[29850]: [info] play.api.Play - Application started (Prod)
Jun 30 13:59:37 server cortex[29850]: [info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9001
Jun 30 13:59:38 server cortex[29850]: [info] application - GET /api/job/aOXIJkMHdn51RHsP/waitreport?atMost=1%20minute returned 404
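The stack trace above shows a DEBUG log line (`geoip_country - DEBUG - 62.210.15.114`) prepended to the JSON report: the analyzer logs to stdout, which Cortex then fails to parse as JSON. A hedged sketch of the fix (names illustrative): route log records to stderr so stdout carries nothing but the report.

```python
import json
import logging
import sys

# Send log records to stderr; Cortex only reads stdout, which must stay
# a single parseable JSON document.
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
log = logging.getLogger("geoip_country")

def emit_report(report):
    log.debug("emitting report")   # stderr: safe for diagnostics
    json.dump(report, sys.stdout)  # stdout: the JSON report, nothing else

emit_report({"error": "GeoIP resolving error"})
```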
Enhancement
This task aims to include the new logo in the Cortex user interface.
Bug
any
The section "analyzer.config.global" in the configuration file is not used when running analyzers. This section contains items common to all analyzers, in particular the proxy configuration.
As a workaround, duplicate the content of the global section in all analyzer sections.
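The intended behaviour can be sketched as a shallow merge where analyzer-specific keys override the shared global section (an illustration, not Cortex's actual merge logic):

```python
def effective_config(global_cfg, analyzer_cfg):
    """Merge the shared `analyzer.config.global` section (e.g. proxy
    settings) into an analyzer's own config; analyzer keys win."""
    merged = dict(global_cfg)
    merged.update(analyzer_cfg)
    return merged
```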
Have a way to disable analyzers in `application.conf` instead of removing analyzer folders after every update.
Add the list in `/etc/cortex/application.conf`:
[..]
analyzers {
config {
[..]
}
disabled = [
analyzer_name_1,
analyzer_name_4,
analyzer_name_12,
analyzer_name_N
]
}
[..]
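The proposed `disabled` list could be honoured with logic along these lines (a sketch of the feature request; names illustrative):

```python
def enabled_analyzers(all_names, disabled):
    """Filter out analyzers listed under `analyzers.disabled`, so folders
    no longer need to be deleted after each update."""
    blocked = set(disabled)
    return [name for name in all_names if name not in blocked]
```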
Feature request
Debian, Cortex with systemd
Cortex analyzers are run using whatever Python interpreter is found in PATH.
There is no configuration option to specify the Python interpreter or virtualenv to use.
# use a Cortex virtualenv for analyzers
Environment=PATH=/opt/cortex/virtual_env_path/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Bug Maybe
Question | Answer |
---|---|
OS version (server) | Ubuntu |
OS version (client) | Ubuntu 16.04 |
Cortex version / git hash | 1.1.0 |
Package Type | Binary & Deb |
Browser type & version | Firefox |
Yesterday I upgraded TheHive from 2.10 to 2.11 and everything went well. I am using the binary deployment and all was OK.
The Cortex engine was 1.0.1 and worked with TheHive 2.11. I have my own analyzers, and they worked fine. Then I tried to upgrade Cortex 1.0.1 to 1.1.0, first from the binary and later from the deb package.
First problem: when I start Cortex, it doesn't show any analyzers. I checked application.conf and everything seems OK.
From the deb package, the same problem: when I start Cortex 1.1.0, no analyzers show up.
I went back to Cortex 1.0.1 and it works well. Maybe some bugs? Are there any new config files to be configured?
Thanks in advance, I want to "release" my analyzers soon with your support!