tyruiop / syncretism
Ops.Syncretism is an open source options search engine
Home Page: https://ops.syncretism.io
License: GNU Affero General Public License v3.0
Two options: localStorage to save these (simpler, no need for login/pass).

When the user clicks on the contract symbol, they should be able to see charts plotting the historical data.
Currently all options are crawled at the same rate, however some options have little to no variation over extended periods of time while others change rapidly during the day.
[ticker, expiration date] pair becomes active and high.

On the frontend, the user should be able to click on a column's header to sort by it (like on the old frontend).
Tracked options should be refreshed (e.g. once per hour for now).
Currently all the data is saved in date/TICKER.txt.gz files (compressed, one edn map per line) as we crawl.
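Reading such a file back is straightforward; here is an illustrative Python sketch (the date/TICKER.txt.gz layout with one edn map per gzipped line comes from the issue above, but the sample contract line and function name are made up, and the edn is kept as a raw string since Python has no standard edn parser):

```python
import gzip
from pathlib import Path

def read_day_file(path):
    """Yield raw edn lines from a compressed date/TICKER.txt.gz file."""
    with gzip.open(path, mode="rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield line

# Build a tiny sample file first (the layout is an assumption).
sample = Path("2021-07-30") / "AAPL.txt.gz"
sample.parent.mkdir(exist_ok=True)
with gzip.open(sample, "wt", encoding="utf-8") as f:
    f.write('{:contractSymbol "AAPL210730C00145000" :delta 0.52}\n')

lines = list(read_day_file(sample))
print(len(lines))  # 1
```

Any time-series generator would start from exactly this kind of per-day, per-ticker scan.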
We need a way to generate and store time series based on said data.
This is crucial to be able to calculate the greeks, and compare options among themselves.
Users should be able to search by how much an option's feature deviates from its historical average (20d & 100d).
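The deviation search above can be sketched as a plain function (Python, hypothetical name; the 20d and 100d windows come from the issue, everything else is illustrative):

```python
def deviation_from_average(history, window):
    """Percent deviation of the latest value from its trailing-window average.

    history: chronological list of floats (oldest first).
    Returns None when there is not enough data for the window.
    """
    if len(history) < window + 1:
        return None
    latest = history[-1]
    avg = sum(history[-window - 1:-1]) / window
    return (latest - avg) / avg * 100.0

# Toy series: flat at 1.0 for 20 days, then a jump to 1.5.
series = [1.0] * 20 + [1.5]
print(round(deviation_from_average(series, 20), 1))  # 50.0
```

A filter like "IV deviates more than 30% from its 20d average" then becomes a simple threshold on this value.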
Also potentially implement paging instead of appending to same table
Not sure if this is resolved. When I tried last night, the volume filter returned options whose volume was outside the filtered band, alongside the correct ones.
Now that some tests have been written, it would be nice if we got some CI going on.
If we create a PWA it will allow the user to
Note that the "state" should be able to be downloaded and then uploaded to restore settings.
There are currently no tests whatsoever in the codebase.
I need to write tests at least for
datops-crawler
datops-compute
Once the search engine becomes a PWA, we should allow the user to select and track some options in particular, so they don't have to search for them again every time.
Would like to have % IV changes to see option buys or option sells:
+IV % change: option buys.
-IV % change: option sells.
It is currently hardcoded in the config file of the crawler (and I think in the compute library), it should be automatically gathered.
The Home view should display, for the moment, tracked options.
In the future we might want to include general statistics about the market, this issue is quite open for now.
Using a statement like

SELECT timeseries.contractSymbol, AVG(timeseries.delta) FROM timeseries LEFT JOIN live ON timeseries.contractSymbol = live.contractSymbol WHERE live.expiration > 1627721273 GROUP BY timeseries.contractSymbol;

at the end of each day, update the live table with the average value of (most) columns as we go, to enable search by average.
Add two averages for each column for now: 20-day & 100-day.
This will allow identifying drops & peaks in each contract's movements.
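The end-of-day folding step could look like the following self-contained sqlite3 sketch (the table shapes, column names, and sample row are assumptions modeled on the query above, not the project's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE live (contractSymbol TEXT PRIMARY KEY,
                   expiration INTEGER, avg_delta REAL);
CREATE TABLE timeseries (contractSymbol TEXT, delta REAL);
INSERT INTO live VALUES ('AAPL210730C00145000', 1627721274, NULL);
INSERT INTO timeseries VALUES ('AAPL210730C00145000', 0.4),
                              ('AAPL210730C00145000', 0.6);
""")

# End-of-day pass: fold each contract's average back into the live table
# so the search frontend can filter on it directly.
conn.execute("""
UPDATE live SET avg_delta = (
    SELECT AVG(t.delta) FROM timeseries t
    WHERE t.contractSymbol = live.contractSymbol
) WHERE live.expiration > 1627721273
""")

(avg,) = conn.execute(
    "SELECT avg_delta FROM live WHERE contractSymbol = 'AAPL210730C00145000'"
).fetchone()
print(avg)  # 0.5
```

The same pass would run once per window (20d, 100d) with a date cutoff on the timeseries side.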
In order to secure queries & improve performance: use ? placeholders, like it is done for batch queries, for all queries.
See https://stackoverflow.com/questions/13116042/how-to-pass-a-variable-to-a-in-clause
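The linked Stack Overflow approach, sketched here with Python's sqlite3 (table name and values are made up for illustration), builds one ? placeholder per value so user input never reaches the SQL text itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE live (contractSymbol TEXT)")
conn.executemany("INSERT INTO live VALUES (?)",
                 [("AAPL1",), ("AAPL2",), ("TSLA1",)])

wanted = ["AAPL1", "TSLA1"]
# One '?' per value: the IN clause is parameterized, never string-built.
placeholders = ",".join("?" * len(wanted))
rows = conn.execute(
    f"SELECT contractSymbol FROM live WHERE contractSymbol IN ({placeholders})",
    wanted,
).fetchall()
print(sorted(r[0] for r in rows))  # ['AAPL1', 'TSLA1']
```

Only the placeholder string varies with input length; the values themselves always travel through the driver's binding mechanism.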
User should be allowed to set a timer for a given filter (e.g. 1 / hour) and have it be automatically refreshed following that timer (only during market hours).
Solution:
Allow multiple "listing" views in the options tab, each with its own associated filter & timer.
It appears in both datops-crawler and datops-compute. Such code is likely to appear more in the future (for greeks or other financial calculations). It would make sense to refactor it all into a separate library (e.g. datops-financial).
Once the data is generated and available, create an API to be able to request this data.
Allow for filtering of data set by IV range as part of the query.
Do a form of Excel-lite, where one can:
Hey is this still crawling and updating? We are no longer getting data from the historical endpoint:
The crawler is gathering almost all required data to calculate the greeks as we crawl (the only one missing is the risk free interest rate, which can be hardcoded).
So let's add this calculation (tentatively) to the crawler. If it proves too demanding (given the crawler's speed requirement), we will move it to regular computation in the datops-compute module.
Greeks' formulas can be found here https://www.macroption.com/black-scholes-formula/
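As one concrete example of those formulas, here is a sketch of Black-Scholes delta for a European call (Python, illustration only; the risk-free rate is hardcoded at 5% purely as a stand-in, following the hardcoding suggestion above, and no dividends are assumed):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(spot, strike, t_years, rate, iv):
    """Black-Scholes delta of a European call, no dividends."""
    d1 = (log(spot / strike) + (rate + 0.5 * iv * iv) * t_years) \
         / (iv * sqrt(t_years))
    return norm_cdf(d1)

# At-the-money call, one year to expiry, 20% implied volatility.
delta = bs_call_delta(spot=100, strike=100, t_years=1.0, rate=0.05, iv=0.20)
print(round(delta, 4))  # ~0.6368
```

Since the crawler already carries spot, strike, expiry, and IV per contract, each greek is a pure per-row function like this one, which is what makes doing it inline at crawl time plausible.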
The frontend is quite broken on smartphones, the worst being the Home view and the charts.
We need to fix the CSS (and possibly the chart generation).
Dear Team,
Thanks for your effort.
The filter list is becoming quite long and will keep getting longer.
Solution would be to only show a short list of default filters, and have a way for the user to add more filters by selecting them among a list of available filters.
Currently, one needs the chain ID (i.e. 'AAPL210730C00145000'). Would it be possible to either have an endpoint that returns all of those chain IDs based on the ticker, expiration, and call/put, or be able to call the historical endpoint based on those parameters?
e.g. be able to search options with a given filter and see their immediate successor on the ladder, e.g. strike 90 -> 95 for calls, or 90 -> 85 for puts.
datops-compute should calculate ladders at the end of each day (only for options that require it). (contractsymbol if the filter is active?)
For each option, keep track of the volume average and add a filter allowing to search by volume variance.
The volume average calculation should be done at the same time as monthly yield update.
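One way to keep a volume average and variance without storing every observation between updates is Welford's online algorithm; a minimal sketch (Python, hypothetical class name, toy volumes):

```python
class RunningStats:
    """Online mean/variance (Welford): one pass, O(1) memory per contract."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def push(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def variance(self):
        """Population variance; 0.0 before any data arrives."""
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for volume in [100, 200, 300]:
    stats.push(volume)
print(stats.mean, stats.variance())  # 200.0  ~6666.67
```

Each contract would carry (n, mean, m2) in the database, updated alongside the monthly yield pass, and the variance filter reads straight off those three numbers.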
The standalone client should allow a variety of things:
Currently done by hand, need to create a crontab for it.