mozilla / tls-observatory
An observatory for TLS configurations, X509 certificates, and more.
License: Mozilla Public License 2.0
When a certificate is valid for a number of domains but one of them fails validation, the validation flag is set to false for the entire certificate.
The validationInfo should instead indicate, per domain, whether validation succeeded or failed, thus allowing a certificate to be trusted for "domain1.com" but not for "domain2.org".
Validate each certificate's chain of trust and store the error accordingly
Although not directly related to the TLS Observatory codebase, I'm placing this here for tracking purposes. A Minion plugin should be integrated into the Observatory to call its API as part of a Minion scan. The plugin should use long polling to retrieve results when they become available.
While validating against different trust stores, we query Elasticsearch before the previous results have been indexed, so some results get overwritten instead of processed and re-indexed.
A possible solution is to wait for them to be indexed before HandleCertChain returns and starts pushing other certificates.
Run go vet and golint on all code to follow Go conventions.
Analyzers, retrievers and workers should log their activity to syslog, in the DAEMON facility.
Example: https://tls-observatory.services.mozilla.com/api/v1/certificate?id=449
{"validity": {
"notBefore": "2015-12-03T20:26:07.154764Z",
"notAfter": "2015-12-03T20:26:07.154765Z"
}}
This call should be preceded by a queue declaration:
https://github.com/mozilla/TLS-Observer/blob/master/src/tlsAnalyser/tlsAnalyser.go#L239
(returned errors should be checked as well).
A new analysis worker should evaluate the CT extension of certificates and verify its signatures.
Sample X509 extension:
CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1(0)
Log ID : A4:B9:09:90:B4:18:58:14:87:BB:13:A2:CC:67:70:0A:
3C:35:98:04:F9:1B:DF:B8:E3:77:CD:0E:C8:0D:DC:10
Timestamp : Nov 24 04:10:02.376 2015 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:98:A9:69:0D:E6:B0:9A:D9:61:47:7E:
4A:6A:80:B3:AA:A5:93:18:EF:88:63:F2:ED:B5:AA:72:
ED:4C:DB:71:21:02:21:00:F6:86:A3:83:4D:83:53:AB:
26:AE:3F:2D:28:D3:22:AB:E3:C9:86:A3:8B:A9:91:AE:
59:85:48:C7:FF:15:49:28
Signed Certificate Timestamp:
Version : v1(0)
Log ID : 68:F6:98:F8:1F:64:82:BE:3A:8C:EE:B9:28:1D:4C:FC:
71:51:5D:67:93:D4:44:D1:0A:67:AC:BB:4F:4F:FB:C4
Timestamp : Nov 24 04:10:02.392 2015 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:44:02:20:0B:91:93:5D:98:61:78:B8:00:17:68:AE:
C1:CA:0B:24:D4:46:8F:E1:E0:0F:D5:A2:FD:18:7E:05:
B9:2F:4E:0F:02:20:51:98:7C:10:2C:3F:D1:A8:8B:7E:
7D:7A:25:8C:5F:2C:E7:79:B5:3C:49:21:B7:28:6B:0D:
A0:AE:8D:D0:21:E9
Signed Certificate Timestamp:
Version : v1(0)
Log ID : 56:14:06:9A:2F:D7:C2:EC:D3:F5:E1:BD:44:B2:3E:C7:
46:76:B9:BC:99:11:5C:C0:EF:94:98:55:D6:89:D0:DD
Timestamp : Nov 24 04:10:02.643 2015 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:44:02:20:32:4E:28:EB:1F:A8:69:29:C7:D4:9D:CC:
B4:09:74:76:03:B3:9E:23:BC:9C:FD:87:FD:29:FB:89:
B5:7E:6C:BC:02:20:26:81:30:E3:FD:EF:5A:23:8F:C5:
58:FB:80:48:E3:AD:CE:D3:1B:A0:52:24:D0:3A:FD:14:
B8:3E:41:0F:8D:C4
Add lastSeenTimestamp to connection documents. Rename collectionTimeStamp and connectionTimeStamp to firstSeenTimestamp.
Lots of error responses on certificate retrieval, possibly due to file descriptor exhaustion.
Commit c3712e1 may fix it, but that needs to be verified.
Create a module that interacts with PostgreSQL as a backend storage.
D1C339EA2784EB870F934FC5634E4AA9AD5505016401F26465D37A574663359F
Make the process more consistent by not relying on certificate chain ordering received from server.
Need to decide on how to process cipherscan output to have a valid and kibana-searchable elasticsearch schema for tls connection info.
current cipherscan output:
type TLSConnectionInfo struct {
	Target       string        `json:"target"`
	Timestamp    string        `json:"utctimestamp"`
	ServerSide   bool          `json:"serverside"`
	CipherSuites []Ciphersuite `json:"ciphersuite"`
}

type Ciphersuite struct {
	Cipher       string   `json:"cipher"`
	Protocols    []string `json:"protocols"`
	PubKey       []string `json:"pubkey"`
	SigAlg       []string `json:"sigalg"`
	Trusted      string   `json:"trusted"`
	TicketHint   string   `json:"ticket_hint"`
	OCSPStapling string   `json:"ocsp_stapling"`
	PFS          string   `json:"pfs"`
}
After taking a closer look at Kibana, I saw that validation fails when the server sends us an incomplete certificate chain. These cases can be found by searching for an empty parentSignature.
I validated some of the results against SSL Labs to be sure. There seem to be a lot of these cases, and that can't be good :)
Write a worker that compares the certificate and ciphersuite of a target with the requirements from https://wiki.mozilla.org/Security/Server_Side_TLS and outputs a compliance grade of modern, intermediate, or old.
If invoked with a target grade, the worker should indicate what the site is failing to achieve that grade. For examples see https://github.com/jvehent/cipherscan/blob/master/analyze.py
$ python analyze.py -l modern -t accounts.firefox.com
accounts.firefox.com:443 has intermediate with bad ordering ssl/tls
and DOES NOT comply with the 'modern' level
Changes needed to match the modern level:
* remove cipher AES128-GCM-SHA256
* remove cipher AES128-SHA256
* remove cipher AES128-SHA
* remove cipher AES256-GCM-SHA384
* remove cipher AES256-SHA256
* remove cipher AES256-SHA
* remove cipher DES-CBC3-SHA
* disable TLSv1
* use DHE of at least 2048bits and ECC of at least 256bits
* consider enabling OCSP Stapling
If the scanners crash, they need to pick up incomplete work. One way to do that is to periodically run this query:
select pg_notify('scan_listener', ''||id ) from scans where completion_perc=0;
which will resend scan notifications for targets that are still at 0% completion.
Make a worker that alerts when a certificate contains one of the following wildcards:
*.mozilla.org
*.firefox.com
*.firefox.org
While not entirely confirmed, it is very much possible that the Observatory would be deployed as a Docker app in AWS. I imagine that each component would live in its own container and communicate via the database and SQS. It would be good to provide a Dockerfile for development purposes, which would pave the way to an easier production deployment.
Write provisioning code to create instances and deploy the observatory in aws
Extend current logging capabilities to be able to take advantage of remote syslog endpoints.
Manage 3rd-party packages with https://github.com/tools/godep, with proper version pinning.
Add update logic for connection documents: when a new connection doc differs from the already known one, the new doc is stored and the old one is deprecated by setting its obsoletedBy attribute to the ID of the new doc.
Make a worker that alerts when an analyzed connection supports SSLv3.
Write a worker that alerts when a Mozilla certificate expires in less than 7 days.
Redesign and rethink architectural decisions in order to provide the API required by Minion.
The main analyzers should publish analyzed results into RabbitMQ worker queues, where task-specific workers can process them and alert.
Make a worker that detects certificates that have a validity period longer than 39 months.
I propose that we change the DB schema to store all domains from the subject
and subjectaltname
x509 fields under a single domains
columns, thus simplifying queries that want to look for both values (eg. WHERE subject ~ 'mozilla' OR subjectaltname ~ 'mozilla'
).
The subject column which now contains the CN value only should be modified to contain the whole X509 subject line: CN, O, OU, C, ...
Create a message broker module that utilises SQS/SNS.
AWS applies a limit to the number of concurrent connections a given RDS database can accept. On the t2.medium we use for testing, that appears to be set to 120. We should limit how many connections the API and scanner are allowed to open using SetMaxOpenConns(), maybe by reusing the already present concurrency limit.
The API should check that the DB schema is correct at startup, and either create the database entirely, or apply changes between versions (like alter table in the future).
Write a command-line client for the Observatory API that evaluates the certificate and ciphersuite of a target and returns the results in near real time to the user.
Provide an API that can be queried and provides direct results to the caller.
Unless I'm mistaken, those columns are not used anymore, in favor of the trust table. Can we remove them from the schema entirely?