Dashboard for 0DTE options
Designed to reload a Pandas DataFrame from a Parquet file if/when that file changes. This approach means the web server never needs to make calls to other Internet services.
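A minimal sketch of that reload-on-change idea. The `FileWatcher` name and the `loader` hook are assumptions for illustration, not the app's actual code:

```python
import os
import pandas as pd

class FileWatcher:
    """Reload a DataFrame only when the backing file's mtime changes."""

    def __init__(self, path, loader=pd.read_parquet):
        self.path = path
        self.loader = loader
        self.mtime = None
        self.df = None

    def get(self):
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:  # first call, or the file was rewritten
            self.df = self.loader(self.path)
            self.mtime = mtime
        return self.df
```

Callbacks can call `get()` on every poll; the Parquet read only happens when the writer process has updated the file.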
The Pandas DataFrame is a slightly trimmed result of essentially the following:
- polling.py — demonstrates the browser polling the server for incremental data
- react.py
- The Plotly charts currently resend the entire dataset on every update. For the Pez dispenser this is around 1 MB. Ideally only the incremental data would be sent; the browser can then figure out what to do with it.
- Until incremental updates are fixed, the app uses `extendData` on the graph, which is maybe not great.
- Create a CSV for each Parquet file
- gzip the JSON file
- Concatenate the SPX GEX files into a single file
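The concatenation item could look roughly like this; the glob pattern and output name are assumptions, and the combine step is split out so it can be reused:

```python
import glob
import pandas as pd

def concat_frames(frames):
    """Stack a list of DataFrames, discarding their original row indexes."""
    return pd.concat(frames, ignore_index=True)

def concat_parquet(pattern, out_path):
    """Concatenate every Parquet file matching `pattern` into one file."""
    frames = [pd.read_parquet(f) for f in sorted(glob.glob(pattern))]
    combined = concat_frames(frames)
    combined.to_parquet(out_path)
    return combined
```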
```
gunicorn app:server -b :8050 --access-logfile access.log -D
```
dcc.Store: when used as an Input or State, all of its data is transferred from the client to the server; the same applies in the other direction for an Output. A clientside script can manipulate the Store, including setting it to null. If it is set to null in JS, the Python server receives None.
#############
https://stackoverflow.com/questions/65990492/what-is-the-correct-way-of-using-extenddata-in-dcc-graph

```python
app.clientside_callback(
    """
    function (n_intervals) {
        return [{x: [n_intervals], y: [2]}, [0]]
    }
    """,
    Output('extend-graph', 'extendData'),
    Input('interval-component', 'n_intervals'),
)
```

#############

```python
pattern = r'^(?P<ticker>\w+)_(?P<month>\d{2})(?P<day>\d{2})(?P<year>\d{2})(?P<putCall>[PC])(?P<strikePrice>\d+)$'
df[['ticker', 'month', 'day', 'year', 'putCall', 'strikePrice']] = df['symbol'].str.extract(pattern)
```
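Applying the symbol-parsing pattern to a made-up symbol of the same shape shows what the extracted columns look like (all values come back as strings):

```python
import pandas as pd

# Named groups recover ticker, MMDDYY expiry, put/call flag, and strike.
pattern = r'^(?P<ticker>\w+)_(?P<month>\d{2})(?P<day>\d{2})(?P<year>\d{2})(?P<putCall>[PC])(?P<strikePrice>\d+)$'
df = pd.DataFrame({'symbol': ['SPXW_062823C4400']})
df[['ticker', 'month', 'day', 'year', 'putCall', 'strikePrice']] = df['symbol'].str.extract(pattern)
```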
```python
import numpy as np
import pandas as pd
from scipy.stats import percentileofscore
from utils import OptionQuotes

filename = '../tda-tbd/wip/SPX.X.2023-06-15.GEX.parquet'
filename = '../tda-tbd/data/SPX.X.2023-06-28.parquet'

oq = OptionQuotes(symbol='abc', filename=filename)
df_base = oq.reload()
df = pd.read_parquet(filename)

dfx = df_base.loc[df.processDateTime < pd.to_datetime('2023-06-22 14:59:00-04:00')]
dfx = df.loc[df.processDateTime == df.processDateTime.max()]
```
```python
vix = pd.read_parquet('./vix.parquet')
df = pd.merge(df, vix.vix, left_index=True, right_index=True)
```
```python
def calculate_quartiles(x):
    # rolling.apply must return a scalar, so pd.qcut can't be used directly;
    # return the quartile (1-4) of the window's most recent value.
    return pd.qcut(x, q=4, labels=False, duplicates='drop')[-1] + 1

df['priceRet_quartile'] = (
    df['priceRet'].fillna(0).abs()
    .rolling(window=100)
    .apply(calculate_quartiles, raw=True)
)
```
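Since `percentileofscore` is already imported above, a rolling percentile rank is another way to bucket `priceRet`; a sketch, with an illustrative window size:

```python
import pandas as pd
from scipy.stats import percentileofscore

def rolling_pct_rank(s, window=100):
    """Percentile rank (0-100) of each value within its trailing window."""
    return s.rolling(window).apply(
        lambda x: percentileofscore(x, x[-1]), raw=True
    )
```

The result is continuous (0-100), so quartiles fall out for free by flooring into four buckets.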
```python
def write_excel(df, filename):
    df = df.copy()
    # Excel can't store tz-aware datetimes; strip the timezone first.
    for column in df.select_dtypes(include=['datetime64[ns]', 'datetime64[ns, US/Eastern]']):
        df[column] = df[column].dt.tz_localize(None)
    #df['Dates'] = df['Dates'].dt.strftime('%Y-%m-%d %H:%M:%S')
    df.to_excel(filename, index=False)
```
```python
def df_priorOpenInterest(df):
    max_dt = df.processDateTime.max()
    #from_dt = pd.to_datetime('2023-06-15 00:00-0400')
    #to_dt = pd.to_datetime('2023-06-17 00:00-0400')
    #df = df.loc[(df.processDateTime == max_dt) & (df.expirationDate >= from_dt) & (df.expirationDate < to_dt)]
    df = df.loc[df.processDateTime == max_dt]
    return df[['symbol', 'openInterest']]
```
```python
dfx = pd.read_parquet('../tda-tbd/wip/SPX.X.2023-06-14.GEX.parquet')
df_prior = dfx.loc[dfx.processDateTime == dfx.processDateTime.max()][['symbol', 'openInterest']]
```
```python
df_all = pd.read_parquet(filename)
df_all['gex'] = df_all.openInterest * df_all.gamma
df_all['priorOpenInterest'] = pd.merge(
    df_all[['symbol']], df_prior[['symbol', 'openInterest']],
    on='symbol', how='left',
)['openInterest']
```
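The prior-open-interest lookup can also be expressed with `Series.map`, which keys directly on `symbol` instead of relying on the merge result lining up row-for-row with `df_all`; a sketch with toy data:

```python
import pandas as pd

df_all = pd.DataFrame({'symbol': ['A', 'B', 'C'], 'openInterest': [10, 20, 30]})
df_prior = pd.DataFrame({'symbol': ['A', 'C'], 'openInterest': [7, 9]})

# Look up prior open interest by symbol; symbols absent yesterday become NaN.
df_all['priorOpenInterest'] = df_all['symbol'].map(
    df_prior.set_index('symbol')['openInterest']
)
```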
```python
import datetime
import pytz

today = datetime.datetime.now(pytz.timezone('US/Eastern'))
to_date = today + datetime.timedelta(days=90)
to_date = pd.to_datetime('2023-06-17', utc=True)  # alternative hard-coded cutoff

max_dt = df_all.processDateTime.max()
df = df_all.loc[
    (df_all.processDateTime == max_dt)
    & (df_all.expirationDate >= today)
    & (df_all.expirationDate < to_date)
    & (df_all.strikePrice >= 4100)
    & (df_all.strikePrice <= 4500)
]
```
```python
#df = df.groupby(['putCall', 'strikePrice']).gex.sum().to_frame()
#dfx = df.groupby(['strikePrice', 'putCall']).agg({'gex': 'sum', 'underlyingPrice': 'mean'})
#df = df.reset_index()
#df = df.loc[(df.gex > 0.0)]
```
```python
# Gamma-weighted average strike for each side.
dfx = df.loc[df.putCall == 'PUT']
putGex = (dfx.gex * dfx.strikePrice).sum() / dfx.gex.sum()
dfx = df.loc[df.putCall == 'CALL']
callGex = (dfx.gex * dfx.strikePrice).sum() / dfx.gex.sum()
```
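A toy check of the gamma-weighted strike computation above; the numbers are invented:

```python
import pandas as pd

df = pd.DataFrame({
    'putCall': ['PUT', 'PUT', 'CALL'],
    'strikePrice': [4200.0, 4300.0, 4400.0],
    'gex': [1.0, 3.0, 2.0],
})

dfx = df.loc[df.putCall == 'PUT']
putGex = (dfx.gex * dfx.strikePrice).sum() / dfx.gex.sum()   # (4200 + 3*4300) / 4 = 4275.0
dfx = df.loc[df.putCall == 'CALL']
callGex = (dfx.gex * dfx.strikePrice).sum() / dfx.gex.sum()  # single strike, so 4400.0
```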
```python
# Flip put GEX negative (note: applied to df, not the CALL-only dfx above).
df.loc[df.putCall == 'PUT', 'gex'] *= -1
```
```python
import pandas as pd
import yfinance as yf

def download_vix():
    df = pd.read_csv('./DIX.csv')
    end_date = df.date.max()
    start_date = df.date.min()
    vix = yf.download('^VIX', start=start_date, end=end_date)
    vix = vix.rename(columns={'Close': 'close', 'Volume': 'volume'})
    vix = vix.rename_axis('date').reset_index()
    vix = vix[['date', 'close', 'volume']]
    vix.to_parquet('./vix.parquet')
    return vix

vix = download_vix()
```