iam-abbas / reddit-stock-trends

Fetch currently trending stocks on Reddit

License: MIT License

Python 43.58% JavaScript 0.44% HTML 12.76% Vue 23.85% TypeScript 15.06% Shell 0.22% Dockerfile 4.09%

reddit-stock-trends's Introduction

Reddit Stock Trends 📈

See trending stock tickers on Reddit and check stock performance


Backend

Reddit API

  • Get your Reddit API credentials from here.
  • Follow this article to get your credentials.
  • Go to the back/ directory.
  • Create a praw.ini file with the following contents:
[ClientSecrets]
client_id=<your client id>
client_secret=<your client secret>
user_agent=<your user agent>

Note that the title of this section, ClientSecrets, is important because ticker_counts.py will specifically look for that title in the praw.ini file.
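
For reference, a minimal sketch of how a script picks up that section; the praw.Reddit('ClientSecrets') call matches what ticker_counts.py does, while the subreddit name and limit here are illustrative only:

import praw

# PRAW finds praw.ini in the working directory and reads the
# section whose name is passed to the constructor.
reddit = praw.Reddit('ClientSecrets')

# Illustrative check that the credentials work: list a few new posts.
for post in reddit.subreddit('wallstreetbets').new(limit=5):
    print(post.title)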

Local usage

  • Install required modules using pip install -r requirements.txt.
  • Run ticker_counts.py first.
  • Now run yfinance_analysis.py.
  • You will be able to find your results in the data/ directory (a quick way to load them is sketched after this list).
  • [Optional] Run wsgi.py to start a server that returns the data in JSON format. This step will generate the csv files if they don't already exist.
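
A quick way to load those results, assuming pandas and the date-stamped file names the scripts produce (e.g. 2021-04-06_tick_df.csv, as seen in back/data/):

from datetime import date

import pandas as pd

# The output files are stamped with the run date.
today = date.today().isoformat()
tick_df = pd.read_csv(f'data/{today}_tick_df.csv')
financial_df = pd.read_csv(f'data/{today}_financial_df.csv')

print(tick_df.head())
print(financial_df.head())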

Frontend - Vue

There's also a JavaScript web app that shows some data visualizations.

Local usage

Start the local server. This server will generate the csv files if they don't already exist.

cd back
python wsgi.py

Then, launch the client

cd front
cp .env.example .env
npm install
npm run serve

You can change the env variables if you need to.

Docker usage

  • Requires Docker 17.09.0+ and docker-compose 1.17.0+.
  • Make sure you have a .env file or a system-wide export for the variable VUE_APP_API_URL; see env.example.
  • docker-compose up -d starts all services in the background (-d == detached).
  • docker-compose up <servicename> brings up a specific service.
  • docker-compose rm removes the image.
  • docker-compose ps lists running services.
  • docker-compose up --build if you made changes to the compose file or Dockerfiles and need them built into a new image.
  • docker-compose build --no-cache if you made changes to specific files and Docker does not recognize them as new, thereby incorrectly reusing cached layers. This takes about 3 minutes on a gaming PC with no Docker tuning.
  • compose.yml determines service names and which ports are bound to your host. By default, front=8080 and back=5006.
  • Once up, navigate to http://localhost:8080/.
  • docker-compose stop stops all services.

Frontend - React

There's also a React web app that has three more features than the Vue app:

  • It shows all available data on one page, in a single table.
  • You can sort the data by clicking a column header, for example to see which ticker had the largest 5d Change.
  • Each ticker name links directly to that ticker's TradingView overview page, in case you find one interesting and want to analyse it.

Local usage

Start the local server. This server will generate the csv files if they don't already exist.

cd back
python wsgi.py

Then, launch the client

cd react-front
cp .env.example .env
npm install
npm start

You can change the env variables if you need to.

There is no Docker setup for the React app.


Ticker Symbol API - EOD Historical Data

Included for potential future use is a CSV file that contains all the listed ticker symbols for stocks, ETFs, and mutual funds (~50,000 tickers). It was retrieved from https://eodhistoricaldata.com/. You can register for a free API key and make up to 20 API calls every 24 hours.

To retrieve a CSV of all US ticker symbols, use the following:

https://eodhistoricaldata.com/api/exchange-symbol-list/US?api_token={YOUR_API_KEY}
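
A small sketch of fetching that list from Python, assuming the requests package and the endpoint above (the output file name is arbitrary):

import requests

API_KEY = 'YOUR_API_KEY'  # free key from eodhistoricaldata.com, 20 calls/24h
url = ('https://eodhistoricaldata.com/api/exchange-symbol-list/US'
       f'?api_token={API_KEY}')

resp = requests.get(url, timeout=30)
resp.raise_for_status()

# The endpoint returns CSV text; write it to disk for later use.
with open('us_tickers.csv', 'w', encoding='utf-8') as f:
    f.write(resp.text)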

Contribution

I would love to see more work done on this; I think it could become something very useful at some point. All contributions are welcome, so go ahead and open a PR.

  • Join the Discord to discuss development and suggestions.

To Do

See this page.

Suggestions are appreciated.

Donation

If you like what I am doing, consider buying me a coffee; it helps me put more time into this project and improve it.

Buy Me A Coffee

Sponsors

TickerHype: https://tickerhype.com/


If you decide to use this anywhere, please give credit to @abbasmdj on Twitter. If you like my work, check out my other projects on GitHub and my personal blog.

reddit-stock-trends's People

Contributors

boboman-1, brunobarros, caticoa3, cpapadopoulos-signal, cstanford, cypherius17, denbergvanthijs, gamcoh, harshcasper, iam-abbas, ian-myers, jonaswang, jtrain184, matthew11j, mpagels, nighkali, pcheng17, s0meguy1, sicxnull, skullzarmy, tyler-barham, valoriss, vitosamson, vmuriart


reddit-stock-trends's Issues

Add more volume information

Hi,
Is it possible to add more trading-volume information to the results?

Today's volume, from the Yahoo summary,

and from the statistics page:
Avg Vol (3 month)
Avg Vol (10 day)
Shares Outstanding
Float

Thanks
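
For what it's worth, a hedged sketch of pulling those fields with yfinance; the info-dict key names are assumptions based on typical yfinance payloads and may differ between versions:

import yfinance as yf

info = yf.Ticker('GME').info  # example ticker

# Key names are assumptions; use .get() since yfinance's info dict
# is not guaranteed to contain every field for every ticker.
volume_stats = {
    'Volume': info.get('volume'),
    'Avg Vol (3 month)': info.get('averageVolume'),
    'Avg Vol (10 day)': info.get('averageVolume10days'),
    'Shares Outstanding': info.get('sharesOutstanding'),
    'Float': info.get('floatShares'),
}
print(volume_stats)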

404 - GET http://SERVER_IP_ADDRESS:8080/get-basic-data?page=1


I'm trying to deploy with Docker, but when I visit the site on port 8080 it shows nothing; the Chrome debug console shows the 404 above.

I configured praw.ini just like below:

[ClientSecrets]
client_id=a_14_character_string_from_reddit
client_secret=a_27_character_string_from_reddit
user_agent=my_app_name

The backend and frontend containers built well and are running healthy; docker logs show no error messages.

I found that after I deploy the service with docker-compose up -d, there are two new files generated under the back/data folder: 2021-04-06_financial_df.csv and 2021-04-06_tick_df.csv.

Stock prediction tools

I saw that you liked my work; I'm glad you have looked at it: https://github.com/Leci37/LecTrade/tree/develop

My name is Luis. I'm a big-data machine-learning developer, a fan of your work, and I usually check your updates.

I was afraid that my savings would be eaten by inflation, so I created a powerful tool based on past technical patterns (volatility, moving averages, statistics, trends, candlesticks, support and resistance, stock index indicators):
all the ones you know (RSI, MACD, STOCH, Bollinger Bands, SMA, DeMark, Japanese candlesticks, Ichimoku, Fibonacci, Williams %R, Balance of Power, Murrey Math, etc.) and more than 200 others.

The tool creates prediction models of correct trading points (buy and sell signals, so every stock is traded at the right time and in the right direction).
For this I used big-data tools like pandas and stock technical-pattern libraries such as tablib, TAcharts, and pandas_ta for data collection and calculation,
and powerful machine-learning libraries such as sklearn's RandomForest and GradientBoosting, XGBoost, Google TensorFlow, and TensorFlow LSTM.

With models trained on a selection of the best technical indicators, the tool can predict trading points (where to buy, where to sell) and send real-time alerts to Telegram or mail. The points are calculated from the correct trading points of the last two years (including the change to a bear market after the rate hike).

I think it could be useful to you. I would like to share it, and if you are interested in improving it and collaborating, who knows what beautiful things we could make together.

Thank you for your time.
I'm sorry to contact you here, via issues; I didn't know how else to reach you.
https://github.com/Leci37/stocks-Machine-learning-RealTime-telegram/discussions

Remove new users in the webscrape

Consider adding a post.author check during the webscrape and dumping flagged names into a filter. PRAW can fetch the user info and test whether an account has been active for over a certain time, which might weed out the bots. We can keep the user info in a cache. Since we are talking about thousands of comments to search through, we can pass these off to a separate thread that builds a blacklist of commenters to compare against in the main thread. Authors would get through on their first pass, but once checked and rejected, future checks would hit the blacklist. So a bot that spams a random pump and dump would have low volume and, hopefully, not show up. Let me know if it's worth the effort.
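
A rough sketch of that filter, assuming PRAW's Redditor objects (the 90-day threshold and the cache names are illustrative):

import time

import praw

reddit = praw.Reddit('ClientSecrets')

MIN_ACCOUNT_AGE = 90 * 24 * 3600  # 90 days in seconds; illustrative threshold
_blacklist = set()  # rejected authors; later checks hit this first
_whitelist = set()  # accepted authors, cached to avoid repeat lookups

def author_ok(author):
    """Return True if the post author's account is old enough to keep."""
    if author is None:  # deleted accounts have no author object
        return False
    if author.name in _blacklist:
        return False
    if author.name in _whitelist:
        return True
    # created_utc triggers one API fetch the first time a user is seen.
    ok = time.time() - author.created_utc >= MIN_ACCOUNT_AGE
    (_whitelist if ok else _blacklist).add(author.name)
    return ok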

yfinance_analysis not working

Thanks for creating this.
It looks like everything is working properly; however, when I run the yfinance_analysis.py script, nothing happens.
The data from ticker_counts.py imports with no problem.

Add visualisations

For now, I think we need to keep it in the back-end section for monitoring.
When a front-end/web app is built (maybe), we'll integrate it.

Speed up execution suggestion.

Cache the calls to Yahoo Finance to speed up execution.

You should replace all the calls to yf.Ticker(symbol) with this function:

_symbols = {}

def yfTicker(symbol):
    # Create each Ticker object once and reuse it on later calls.
    if symbol not in _symbols:
        _symbols[symbol] = yf.Ticker(symbol)
    return _symbols[symbol]

Additionally when using the notebook you can just restart the kernel or run _symbols = {} again to clear the caching.

Why?

Because a new request is executed every time yf.Ticker(symbol) is called, there are a ton of redundant calls to YF, especially when you're calling the same ticker repeatedly.

An additional possible speedup would be to wrap the history functions in a similar manner, as sketched below.
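
A sketch of both wrappers using functools.lru_cache, which gives the same behaviour with less bookkeeping (the function names here are illustrative, not from the repo):

from functools import lru_cache

import yfinance as yf

@lru_cache(maxsize=None)
def yf_ticker(symbol):
    """Create each yf.Ticker once; later calls return the cached object."""
    return yf.Ticker(symbol)

@lru_cache(maxsize=None)
def yf_history(symbol, period='1mo'):
    """Cache history pulls per (symbol, period) pair."""
    return yf_ticker(symbol).history(period=period)

In a notebook, yf_ticker.cache_clear() plays the same role as re-running _symbols = {}.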

urllib.error.HTTPError: HTTP Error 503: Service Unavailable

Running yfinance_analysis.py results in the following error; can someone help me, please?

Traceback (most recent call last):
  File "/Users/maxrugen/Developer/Reddit-Stock-Trends/back/yfinance_analysis.py", line 71, in <module>
    analyzer.analyze()
  File "/Users/maxrugen/Developer/Reddit-Stock-Trends/back/yfinance_analysis.py", line 24, in analyze
    df_best[dataColumns] = df_best.Ticker.apply(self.get_ticker_info)
  File "/usr/local/lib/python3.9/site-packages/pandas/core/series.py", line 4135, in apply
    mapped = lib.map_infer(values, f, convert=convert_dtype)
  File "pandas/_libs/lib.pyx", line 2467, in pandas._libs.lib.map_infer
  File "/Users/maxrugen/Developer/Reddit-Stock-Trends/back/yfinance_analysis.py", line 46, in get_ticker_info
    info = yf.Ticker(ticker).info
  File "/usr/local/lib/python3.9/site-packages/yfinance/ticker.py", line 138, in info
    return self.get_info()
  File "/usr/local/lib/python3.9/site-packages/yfinance/base.py", line 446, in get_info
    self._get_fundamentals(proxy)
  File "/usr/local/lib/python3.9/site-packages/yfinance/base.py", line 285, in _get_fundamentals
    holders = _pd.read_html(url)
  File "/usr/local/lib/python3.9/site-packages/pandas/util/_decorators.py", line 299, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 1085, in read_html
    return _parse(
  File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 893, in _parse
    tables = p.parse_tables()
  File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 213, in parse_tables
    tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
  File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 732, in _build_doc
    raise e
  File "/usr/local/lib/python3.9/site-packages/pandas/io/html.py", line 713, in _build_doc
    with urlopen(self.io) as f:
  File "/usr/local/lib/python3.9/site-packages/pandas/io/common.py", line 195, in urlopen
    return urllib.request.urlopen(*args, **kwargs)
  File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 214, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 523, in open
    response = meth(req, response)
  File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 632, in http_response
    response = self.parent.error(
  File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 561, in error
    return self._call_chain(*args)
  File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
    result = func(*args)
  File "/usr/local/Cellar/[email protected]/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 641, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 503: Service Unavailable

Create Portfolio data structure and add basic comparison logic

Portfolio Support

Users may want to do some due diligence and select symbols to research. Give the user a convenient way to incorporate and track tickers they are interested in within the stocktrend utility. Tell the user when tickers they are tracking are added to or removed from scrape_latest.

Implementation

In progress

Utilizing a similar config pattern to main_utils, allow users to track specific tickers in groupings called portfolios. Persist them in an .ini file. Use the latest scraper data to update a system-level Portfolio.

Note: try to make Portfolios optional; do not couple them to the current scraping if possible.

Minimum-Viable-Product Acceptance Criteria:

  • Able to CRUD Portfolios
  • Portfolio comparison logic ("diffs"/"deltas"); a sketch follows this list
  • Print to Console when scrape_latest adds or removes tickers that were previously in the scrape_latest portfolio
  • Print to Console when scrape_latest adds or removes tickers that existed in user's portfolios, including scrape_latest
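
A minimal sketch of that comparison logic; the class shape and method names are assumptions, since the issue only pins down the .ini persistence and the diff behaviour:

from dataclasses import dataclass, field

@dataclass
class Portfolio:
    """A named grouping of tickers the user wants to track."""
    name: str
    tickers: set = field(default_factory=set)

    def diff(self, other):
        """Tickers added to and removed from this portfolio relative to `other`."""
        return {
            'added': other.tickers - self.tickers,
            'removed': self.tickers - other.tickers,
        }

# Example: compare yesterday's scrape_latest portfolio with today's.
previous = Portfolio('scrape_latest', {'GME', 'AMC', 'BB'})
latest = Portfolio('scrape_latest', {'GME', 'TSLA'})
print(previous.diff(latest))  # {'added': {'TSLA'}, 'removed': {'AMC', 'BB'}}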

Enhancement to #56 - Portfolios of Interesting People

Portfolios Of Interesting People

Users may want to track popular, or machine-learned-as-reliable, ticker-mentioners. Leverage the work done for #56 and hook into the Reddit result set to update portfolios labeled with userIds from the scrape source (praw/reddit).

Undecided on the name, needs feedback 😂 -- Portfolios of Interesting People - POIPeses 🐬

Notes

  • Every time a person of interest mentions a stock, add to the POI


Using praw.ini

Hey! Have you thought about using a top-level praw.ini to pass in your client id, client secret, and user_agent? See here for its documentation.

Having a section in the praw.ini file that contains the following

[ScraperBot]
client_id = ABCDE
client_secret = 12345
user_agent = ScraperBot

would allow initialization of the Reddit object to simply be reddit = praw.Reddit('ScraperBot'). Plus, it would remove the current need for dotenv.

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)>

Problem

I ran into this weird error after following the usage steps, at the "Now run yfinance_analysis.py" part:

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)>

I'm using macOS 10.15.3.
I tried with Python 3.9 and Python 3.7.

Solution

I found a solution, so I will put this issue here in case someone runs into the same trouble.

Stackoverflow Solution

Tl;dr

If you're using macOS, go to Macintosh HD > Applications > the Python 3.6 folder (or whatever version of Python you're using) and double-click the "Install Certificates.command" file. :D

Oh, I don't have this Install Certificates.command file on my Mac...
Stackoverflow Solution for that

Tl;dr

It seems that, for some reason, Brew has not run the Install Certificates.command that comes in the Python 3 bundle for Mac. The solution is to run the following script (copied from Install Certificates.command) after brew install python3:

# install_certifi.py
#
# sample script to install or update a set of default Root Certificates
# for the ssl module.  Uses the certificates provided by the certifi package:
#       https://pypi.python.org/pypi/certifi

import os
import os.path
import ssl
import stat
import subprocess
import sys

STAT_0o775 = ( stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
             | stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP
             | stat.S_IROTH |                stat.S_IXOTH )


def main():
    openssl_dir, openssl_cafile = os.path.split(
        ssl.get_default_verify_paths().openssl_cafile)

    print(" -- pip install --upgrade certifi")
    subprocess.check_call([sys.executable,
        "-E", "-s", "-m", "pip", "install", "--upgrade", "certifi"])

    import certifi

    # change working directory to the default SSL directory
    os.chdir(openssl_dir)
    relpath_to_certifi_cafile = os.path.relpath(certifi.where())
    print(" -- removing any existing file or link")
    try:
        os.remove(openssl_cafile)
    except FileNotFoundError:
        pass
    print(" -- creating symlink to certifi certificate bundle")
    os.symlink(relpath_to_certifi_cafile, openssl_cafile)
    print(" -- setting permissions")
    os.chmod(openssl_cafile, STAT_0o775)
    print(" -- update complete")

if __name__ == '__main__':
    main()

After that I could run the yfinance analysis script successfully and follow the guide to the end.

prawcore.exceptions.ResponseException: received 401 HTTP response

Hi, I followed the instructions to get the Reddit API credentials.

However, I still get the following error.

Could you please help me figure out why I get the error below?

Version 7.0.0 of praw is outdated. Version 7.1.4 was released 3 days ago.
Selecting relevant data from webscraper: 0% 0/2000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "ticker_counts.py", line 45, in <module>
    post.total_awards_received] for post in tqdm(new_bets, desc="Selecting relevant data from webscraper", total=WEBSCRAPER_LIMIT)]
  File "ticker_counts.py", line 40, in <listcomp>
    posts = [[post.id,
  File "/usr/local/lib/python3.6/dist-packages/tqdm/std.py", line 1166, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.6/dist-packages/praw/models/listing/generator.py", line 61, in __next__
    self._next_batch()
  File "/usr/local/lib/python3.6/dist-packages/praw/models/listing/generator.py", line 71, in _next_batch
    self._listing = self._reddit.get(self.url, params=self.params)
  File "/usr/local/lib/python3.6/dist-packages/praw/reddit.py", line 490, in get
    return self._objectify_request(method="GET", params=params, path=path)
  File "/usr/local/lib/python3.6/dist-packages/praw/reddit.py", line 574, in _objectify_request
    data=data, files=files, method=method, params=params, path=path
  File "/usr/local/lib/python3.6/dist-packages/praw/reddit.py", line 732, in request
    timeout=self.config.timeout,
  File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 339, in request
    url=url,
  File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 235, in _request_with_retries
    url,
  File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 195, in _make_request
    timeout=timeout,
  File "/usr/local/lib/python3.6/dist-packages/prawcore/rate_limit.py", line 35, in call
    kwargs["headers"] = set_header_callback()
  File "/usr/local/lib/python3.6/dist-packages/prawcore/sessions.py", line 282, in _set_header_callback
    self._authorizer.refresh()
  File "/usr/local/lib/python3.6/dist-packages/prawcore/auth.py", line 325, in refresh
    self._request_token(grant_type="client_credentials")
  File "/usr/local/lib/python3.6/dist-packages/prawcore/auth.py", line 153, in _request_token
    response = self._authenticator._post(url, **data)
  File "/usr/local/lib/python3.6/dist-packages/prawcore/auth.py", line 36, in _post
    raise ResponseException(response)
prawcore.exceptions.ResponseException: received 401 HTTP response

Google trends colab: ERROR: Could not find a version that satisfies the requirement numpy==1.20.1

Hi,
I am trying to use the code in Google Colab, and an error is displayed.
Any idea why?

!pip install -r requirements.txt

Requirement already satisfied: certifi==2020.12.5 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 1)) (2020.12.5)
Collecting chardet==4.0.0
Downloading https://files.pythonhosted.org/packages/19/c7/fa589626997dd07bd87d9269342ccb74b1720384a4d739a1872bd84fbe68/chardet-4.0.0-py2.py3-none-any.whl (178kB)
|████████████████████████████████| 184kB 4.9MB/s
Collecting configparser==5.0.1
Downloading https://files.pythonhosted.org/packages/08/b2/ef713e0e67f6e7ec7d59aea3ee78d05b39c15930057e724cc6d362a8c3bb/configparser-5.0.1-py3-none-any.whl
Requirement already satisfied: idna==2.10 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 4)) (2.10)
Collecting lxml==4.6.2
Downloading https://files.pythonhosted.org/packages/bd/78/56a7c88a57d0d14945472535d0df9fb4bbad7d34ede658ec7961635c790e/lxml-4.6.2-cp36-cp36m-manylinux1_x86_64.whl (5.5MB)
|████████████████████████████████| 5.5MB 7.9MB/s
Requirement already satisfied: multitasking==0.0.9 in /usr/local/lib/python3.6/dist-packages (from -r requirements.txt (line 6)) (0.0.9)
ERROR: Could not find a version that satisfies the requirement numpy==1.20.1 (from -r requirements.txt (line 7)) (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0b3, 1.11.0rc1, 1.11.0rc2, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5)
ERROR: No matching distribution found for numpy==1.20.1 (from -r requirements.txt (line 7))

PRAW details

A suggestion: store your details (the client_id and client_secret for praw) as variables in a secrets.py file and import them from that .py file at the start of your notebook.

Then in your .gitignore you can add this secrets.py file so that it will not be pushed to GitHub.

Currently everyone pulling your code is using your account.
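
A minimal sketch of that pattern (file and variable names are illustrative; note that secrets.py shadows the standard-library secrets module, so a name like reddit_secrets.py is safer):

# reddit_secrets.py -- listed in .gitignore so it never reaches GitHub
CLIENT_ID = 'your_client_id'
CLIENT_SECRET = 'your_client_secret'
USER_AGENT = 'your_user_agent'

# at the top of the notebook or script
import praw

from reddit_secrets import CLIENT_ID, CLIENT_SECRET, USER_AGENT

reddit = praw.Reddit(client_id=CLIENT_ID,
                     client_secret=CLIENT_SECRET,
                     user_agent=USER_AGENT)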

tqdm.std.TqdmKeyError

Hello!

Love the idea! I just started playing around with it and ran into an issue. After setting up requirements.txt and doing my first run, I get this error:

Traceback (most recent call last):
  File "./ticker_counts.py", line 48, in <module>
    post.total_awards_received] for post in tqdm(new_bets, desc="Selecting relevant data from webscraper", limit=WEBSCRAPER_LIMIT)]
  File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1007, in __init__
    TqdmKeyError("Unknown argument(s): " + str(kwargs)))
tqdm.std.TqdmKeyError: "Unknown argument(s): {'limit': 2000}"

I do not know anything about tqdm and could not figure out the issue; any ideas, @Denbergvanthijs? I only ping you because tqdm was added to the standalone scripts.

Thanks!
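
For reference, tqdm's progress bar takes total=, not limit= (the limit belongs to the PRAW listing itself). A self-contained sketch of the corrected call, with a stand-in for the listing:

from tqdm import tqdm

WEBSCRAPER_LIMIT = 2000
new_bets = range(WEBSCRAPER_LIMIT)  # stand-in for the PRAW listing generator

# total= tells tqdm the expected length; `limit` is not a tqdm argument.
posts = [post for post in tqdm(new_bets,
                               desc="Selecting relevant data from webscraper",
                               total=WEBSCRAPER_LIMIT)]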

Activation of DeepSource

Hi 👋

One of my Pull Requests around fixing Code Quality Issues with DeepSource was merged here: #67

I'd just like to inform you that the issues fixed here were detected by running DeepSource analysis on the repo. If you like, you can activate analysis for your repository to detect such code quality issues/bug risks on the fly for every change made. You can also use the Autofix feature to fix them with one click.

The .deepsource.toml file you merged will only take effect if you activate analysis for this repo.

Here's what you can do if you wish to activate DeepSource to continuously analyze your repository:

  • Sign up on DeepSource and activate analysis for this repository.
  • Create .deepsource.toml configuration which you can use to configure your analysis settings (My PR already added that, but feel free to edit it anytime).
  • Track/Check analysis here.

If you have any doubts or questions, you can check out the docs, or feel free to reach out :)

ValueError: No tables found

af82387

I downloaded the above hash 3 days ago, set it up, and ran it once a day for two days with no issues.

On the 3rd day I got "html5lib not found", so I ran pip install html5lib. ticker_counts.py runs fine, but

yfinance_analysis.py throws a series of line errors ending in ValueError: No tables found.

I have reverted to 59cd6d0.

Pushshift API

Been working on something similar and will likely contribute if I can set some time aside. However:

Have you seen, or would you consider using, the Pushshift API to get Reddit data? There's more data available, since you can pull from any date.

https://pypi.org/project/pushshift.py/

Error: UTF-8 codec can't decode byte 0xff

Hey, I'm a total beginner at Python and programming, but I'm getting this error message when running ticker_counts:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

The whole log looks like this:

python3 ticker_counts.py
Traceback (most recent call last):
  File "/Users/filipalvgren/Desktop/Reddit-Stock-Trends/back/ticker_counts.py", line 102, in <module>
    ticket.get_data()
  File "/Users/filipalvgren/Desktop/Reddit-Stock-Trends/back/ticker_counts.py", line 49, in get_data
    reddit = praw.Reddit('ClientSecrets')
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/praw/reddit.py", line 188, in __init__
    self.config = Config(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/praw/config.py", line 79, in __init__
    self._load_config(config_interpolation)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/praw/config.py", line 60, in _load_config
    config.read(locations)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/configparser.py", line 697, in read
    self._read(fp, filename)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/configparser.py", line 1017, in _read
    for lineno, line in enumerate(fp, start=1):
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte

Any help solving this issue would be much appreciated!

Progress bar during yfinance analysis

There is a nice progress bar in ticker_counts.py, but there is no such thing in yfinance_analysis.py. One can easily be added where the method is applied to the pandas dataset, as sketched below. The analysis takes a while, and I think a progress bar would make it more user friendly.
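
A sketch of one way to add it, assuming tqdm's pandas integration; the DataFrame and the lambda stand in for df_best and get_ticker_info from yfinance_analysis.py:

import pandas as pd
from tqdm import tqdm

tqdm.pandas(desc="Analyzing tickers")  # registers DataFrame.progress_apply

df = pd.DataFrame({'Ticker': ['GME', 'AMC', 'TSLA']})

# progress_apply behaves like apply but renders a tqdm bar; swap the
# lambda for the real get_ticker_info lookup.
df['Info'] = df.Ticker.progress_apply(lambda ticker: ticker.lower())
print(df)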
