espin086 / newswavemetrics

NewsWaveMetrics is a powerful tool for analyzing news sentiment on both national and local stories, allowing users to correlate these stories with their own uploaded metrics, starting with stock market price data. Stay ahead of the curve and make informed decisions with NewsWaveMetrics.

License: MIT License

Python 100.00%

newswavemetrics's Introduction

Hi 👋, I'm JJ

Data Scientist and Machine Learning Engineer


💡

My Portfolio of Projects

⬇️ ⬇️ ⬇️

💼 GPT-JobHunter: Text Analysis, APIs, SQL, User Input, Machine Learning, Generative AI


Analyzes job postings and provides personalized recommendations to job seekers for improving their resumes.


💰 NewsWaveMetrics: APIs, SQL, Python, Text Analysis, Time Series Analysis, etc.


NewsWaveMetrics is a powerful tool for analyzing news sentiment, allowing users to correlate these stories with stock market price data.


🧠 AutoLearn: Automation, Machine Learning, Data Visualization, Model Training/Tuning/Inference


AutoLearn is a powerful tool for data scientists that automates the process of exploratory data analysis (EDA) and machine learning model training.


💥 EmoTrack: AWS, Computer Vision, Real-Time Processing, SQL


A real-time emotion detection and tracking application using webcam input. Analyze and visualize your emotional trends over time with interactive charts.


Languages and Tools:


aws azure docker gcp git linux opencv pandas python pytorch scikit_learn seaborn sqlite tensorflow


newswavemetrics's People

Contributors

espin086, zaibys


newswavemetrics's Issues

Implement Sentiment Analysis for News Text Using NLTK and Update Database

Summary
We aim to introduce a feature that performs sentiment analysis on the text of stock-related news articles stored in our SQLite database. This analysis should utilize the NLTK (Natural Language Toolkit) library to determine the sentiment of each article, and the results should be updated in the database accordingly.

Detailed Description
By analyzing the sentiment of news text, we can provide users with insights into the general tone (positive, neutral, or negative) of the coverage. This feature will enhance the application's ability to deliver valuable content to users, enabling them to gauge the sentiment around various company and economic topics.

Requirements:

  • Sentiment Analysis: Use the NLTK library to perform sentiment analysis on the full text of each news article stored in the news table of our SQLite database.

  • Database Update: Extend the news table schema to include a new column named sentiment, which will store the sentiment analysis results for each article. The sentiment values should be categorized as "Positive", "Neutral", or "Negative".

  • Batch Processing: Implement a batch processing system that can analyze and update the sentiment of articles already stored in the database, as well as integrate this analysis into the process of storing new articles.

  • Error Handling: Ensure robust error handling for the sentiment analysis process and database updates to manage issues such as analysis failures or database connectivity problems.

  • Documentation: Provide detailed documentation on the sentiment analysis feature, including how to install and configure any necessary NLTK components, and how to run the batch processing for existing and new articles.
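
A minimal sketch of the batch-scoring step, assuming NLTK's VADER analyzer, a database file named newswavemetrics.db, and a news table with a text column (the file and column names are assumptions, not confirmed parts of the project):

```python
import sqlite3

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# Download the VADER lexicon the first time this runs.
nltk.download("vader_lexicon", quiet=True)

DB_PATH = "newswavemetrics.db"  # assumed database file name


def label_sentiment(compound: float) -> str:
    """Map VADER's compound score onto the three required categories."""
    if compound >= 0.05:
        return "Positive"
    if compound <= -0.05:
        return "Negative"
    return "Neutral"


def update_article_sentiment(db_path: str = DB_PATH) -> None:
    """Score every article in the news table and store the label."""
    sia = SentimentIntensityAnalyzer()
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()
        try:
            # Extend the schema with the new sentiment column.
            cur.execute("ALTER TABLE news ADD COLUMN sentiment TEXT")
        except sqlite3.OperationalError:
            pass  # column already exists
        rows = cur.execute(
            "SELECT rowid, text FROM news WHERE text IS NOT NULL"
        ).fetchall()
        for rowid, text in rows:
            score = sia.polarity_scores(text)["compound"]
            cur.execute(
                "UPDATE news SET sentiment = ? WHERE rowid = ?",
                (label_sentiment(score), rowid),
            )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    update_article_sentiment()
```

The same label_sentiment call can be applied to each new article at insert time, so fresh rows never need a separate pass.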

Acceptance Criteria

  • The sentiment analysis of news text using the NLTK library is accurately performed for each article in the news table.
  • The database is updated to include the sentiment analysis results for each article, with the sentiment accurately reflecting the tone of the text.
  • The system can efficiently process and update the sentiment for both new and existing articles in the database.
  • Comprehensive error handling and documentation are in place to support the feature's reliability and usability.
  • The sentiment of a news topic can be visualized.

Build a model training dataset for stock prediction

We need to create a comprehensive dataset for training our stock prediction model. This dataset should include historical stock prices, trading volumes, market indicators, and any other relevant data that can help improve the accuracy of our predictions. The dataset should be well-organized and cleaned to ensure the model's effectiveness.
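
A minimal sketch of how such a dataset might be assembled from the stock_data table described in a later issue; the feature engineering and next-day-direction label below are illustrative choices, and the table, column, and database names are assumptions:

```python
import sqlite3

import pandas as pd

DB_PATH = "newswavemetrics.db"  # assumed database file name


def build_training_set(ticker: str, db_path: str = DB_PATH) -> pd.DataFrame:
    """Return a cleaned feature/label frame for one ticker."""
    conn = sqlite3.connect(db_path)
    prices = pd.read_sql_query(
        "SELECT date, closing_price, volume FROM stock_data "
        "WHERE ticker = ? ORDER BY date",
        conn,
        params=(ticker,),
        parse_dates=["date"],
    )
    conn.close()

    df = prices.set_index("date")
    # Illustrative features: short-horizon returns, volume anomaly, trend ratio.
    df["return_1d"] = df["closing_price"].pct_change()
    df["return_5d"] = df["closing_price"].pct_change(5)
    rolling_vol = df["volume"].rolling(20)
    df["volume_z"] = (df["volume"] - rolling_vol.mean()) / rolling_vol.std()
    df["sma_ratio"] = df["closing_price"] / df["closing_price"].rolling(20).mean()
    # Illustrative label: 1 if the next day's close is higher than today's.
    df["target"] = (df["closing_price"].shift(-1) > df["closing_price"]).astype(int)
    return df.dropna()
```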

Implement Moving Average Cross-Over Stock Trading Strategy in Streamlit App

Create a Python script to implement a simple moving average cross-over stock trading strategy using data from a SQLite database. The script should calculate the moving averages and generate buy/sell signals based on the cross-overs.

Here is a video showing how to do this: https://www.youtube.com/watch?v=PUk5E8G1r44

I would like to create a simple moving average cross-over stock trading strategy that will be integrated into a Streamlit application. The data for this strategy is stored in a SQLite database.

Requirements:

  1. Implement a moving average cross-over strategy using the data from the SQLite database.
  2. Integrate this strategy into a Streamlit application for easy visualization and interaction.
  3. Include data visualization to show the stock prices, moving averages, and buy/sell signals.
  4. Ensure the application is user-friendly and provides clear insights into the trading strategy.
  5. The code needs to be modular, and all parameters of the SMA function (e.g., the window sizes) should be selected by the user before running the analysis.

Additional Information:

  • The SQLite database contains historical stock price data that will be used for the moving average calculations; the ticker to analyze should be selected in the Streamlit UI.
  • The moving average cross-over strategy should generate buy signals when the short-term moving average crosses above the long-term moving average, and sell signals when the short-term moving average crosses below the long-term moving average.
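
A minimal sketch of the Streamlit page, assuming a stock_data table with date, ticker, and closing_price columns; the table, column, and database names are assumptions, and the window sizes are exposed as user inputs per requirement 5:

```python
import sqlite3

import pandas as pd
import streamlit as st

DB_PATH = "newswavemetrics.db"  # assumed database file name

st.title("SMA Cross-Over Strategy")

# User-selected parameters, per requirement 5.
ticker = st.text_input("Ticker", value="AAPL")
short_window = st.number_input("Short SMA window", min_value=2, value=20)
long_window = st.number_input("Long SMA window", min_value=3, value=50)

if st.button("Run analysis"):
    conn = sqlite3.connect(DB_PATH)
    # Assumes a stock_data table with date, ticker, and closing_price columns.
    df = pd.read_sql_query(
        "SELECT date, closing_price FROM stock_data WHERE ticker = ? ORDER BY date",
        conn,
        params=(ticker,),
        parse_dates=["date"],
    )
    conn.close()

    df = df.set_index("date")
    df["sma_short"] = df["closing_price"].rolling(int(short_window)).mean()
    df["sma_long"] = df["closing_price"].rolling(int(long_window)).mean()

    # Buy when the short SMA crosses above the long SMA, sell on the reverse cross.
    above = df["sma_short"] > df["sma_long"]
    df["signal"] = above.astype(int).diff()  # +1 = buy cross, -1 = sell cross

    st.line_chart(df[["closing_price", "sma_short", "sma_long"]])
    st.dataframe(df[df["signal"].isin([1, -1])])
```

Signals come from the sign change of the short-versus-long comparison, so a +1 row marks a buy cross and a -1 row marks a sell cross.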

Implement Feature to Fetch Economic Data from FRED API and Store in SQLite Database

Summary
We require the development of a feature to fetch economic data using the FRED (Federal Reserve Economic Data) API and store this data in a SQLite database. The data will be stored in a new table named fred and should include a range of economic indicators with varying frequencies (daily, weekly, or monthly).

Detailed Description
This feature aims to automate the retrieval of economic indicators from the FRED API, focusing on metrics such as Exchange Rates, Treasury yields, the Fed Funds Rate, CPI, GDP, and more. The data collected will support financial analysis, economic research, and modeling within our application.

Requirements:

  • Data Fields: Fetch the following economic indicators, prioritizing daily data but falling back to weekly or monthly frequencies if daily data is not available:
    • Exchange Rates
    • 2-Year Treasury
    • 10-Year Treasury
    • Fed Funds Rate
    • CPI (Consumer Price Index)
    • GDP (Gross Domestic Product)
    • Industrial Production
    • Employment
    • Consumer Sentiment
    • PPI (Producer Price Index)
    • Any additional LEADING economic indicators listed on the Wikipedia page for Economic Indicators.

  • Database Schema: Create a new table named fred in our existing SQLite database. The table should have columns for each of the metrics listed above, along with columns for the date and frequency of the data.

  • FRED API Integration: Utilize the FRED API to fetch the required economic data. Handle API authentication, rate limits, and data normalization.

  • Data Update Frequency: Implement a mechanism to regularly update the database with the latest available data for each indicator, considering their respective frequencies.

  • Error Handling: Implement comprehensive error handling for API connectivity issues, data parsing errors, and database insertion failures.

  • Documentation: Provide detailed documentation on how to configure the FRED API (including obtaining and setting up API keys) and how to operate the data fetching and storage feature.
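
A minimal sketch of the fetch-and-store step, assuming the third-party fredapi package and, for simplicity, a long-format fred table (one row per date and metric) instead of the wide one-column-per-metric layout described above; the series IDs, frequencies, and database path are illustrative:

```python
import sqlite3

from fredapi import Fred  # third-party client; pip install fredapi

DB_PATH = "newswavemetrics.db"       # assumed database file name
FRED_API_KEY = "YOUR_FRED_API_KEY"   # obtained from fred.stlouisfed.org

# Illustrative metric -> (FRED series ID, frequency) mapping; extend as needed.
SERIES = {
    "two_year_treasury": ("DGS2", "daily"),
    "ten_year_treasury": ("DGS10", "daily"),
    "fed_funds_rate": ("DFF", "daily"),
    "cpi": ("CPIAUCSL", "monthly"),
    "gdp": ("GDP", "quarterly"),
}


def fetch_and_store_fred(db_path: str = DB_PATH) -> None:
    """Pull each series from FRED and upsert it into the fred table."""
    fred = Fred(api_key=FRED_API_KEY)
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS fred (
                   date TEXT, metric TEXT, value REAL, frequency TEXT,
                   PRIMARY KEY (date, metric)
               )"""
        )
        for metric, (series_id, frequency) in SERIES.items():
            series = fred.get_series(series_id).dropna()
            rows = [
                (idx.strftime("%Y-%m-%d"), metric, float(value), frequency)
                for idx, value in series.items()
            ]
            conn.executemany(
                "INSERT OR REPLACE INTO fred (date, metric, value, frequency) "
                "VALUES (?, ?, ?, ?)",
                rows,
            )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    fetch_and_store_fred()
```

Running this on a schedule (e.g., a daily cron job) covers the update-frequency requirement, since INSERT OR REPLACE only rewrites rows that already exist.
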
Acceptance Criteria
  • The system can fetch and store the specified economic indicators from the FRED API into the fred table in the SQLite database.
  • Data is updated according to its availability frequency, with fallbacks to less frequent data if daily data is not available.
  • Proper error handling and logging are in place for troubleshooting.
  • Documentation is complete and clear, enabling easy setup and maintenance of the feature.

Implement Feature to Fetch and Store Stock Data in SQLite Database

Summary
We need to implement a new feature in our project that involves fetching stock data from a financial data API and storing it into a SQLite database. The data should include the following fields for each stock: Date, Ticker, Open Price, High Price, Low Price, Closing Price, and Volume.

Detailed Description
The goal of this feature is to have an automated process that can retrieve daily stock data for a predefined list of tickers and store this information in a SQLite database. This will allow us to perform historical data analysis and backtesting of trading strategies within our application.

Requirements:

  • Data Fields: The stock data should include the following information for each record:
    • Date: The date of the trading session.
    • Ticker: The stock symbol.
    • Open Price: The price at which the stock first traded upon the opening of the exchange.
    • High Price: The highest price at which the stock traded during the trading session.
    • Low Price: The lowest price at which the stock traded during the trading session.
    • Closing Price: The price at which the stock last traded upon the close of the exchange.
    • Volume: The number of shares or contracts traded in a security or an entire market during a given period.

  • Database Schema: The SQLite database should have a table named stock_data with columns corresponding to the data fields mentioned above.

  • Data Source: Please identify and integrate a reliable financial data API that provides the necessary stock data. Consider using APIs like Alpha Vantage, Yahoo Finance, or another free API that meets our data requirements.

  • Data Update Frequency: The feature should be capable of fetching and updating the stock data in the database on a daily basis.

  • Error Handling: Implement error handling for cases where the API is unavailable or returns incomplete data.

  • Documentation: Include documentation on how to set up and use this feature, including any necessary API keys or configurations.
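
A minimal sketch using the yfinance package for Yahoo Finance (one of the free sources mentioned above); the ticker list, schema, and database path are assumptions:

```python
import sqlite3

import yfinance as yf  # third-party Yahoo Finance client; pip install yfinance

DB_PATH = "newswavemetrics.db"        # assumed database file name
TICKERS = ["AAPL", "MSFT", "GOOG"]    # placeholder ticker list


def fetch_and_store_stock_data(db_path: str = DB_PATH) -> None:
    """Download daily OHLCV bars and upsert them into stock_data."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS stock_data (
                   date TEXT, ticker TEXT, open_price REAL, high_price REAL,
                   low_price REAL, closing_price REAL, volume INTEGER,
                   PRIMARY KEY (date, ticker)
               )"""
        )
        for ticker in TICKERS:
            bars = yf.Ticker(ticker).history(period="1y", interval="1d")
            for date, row in bars.iterrows():
                conn.execute(
                    """INSERT OR REPLACE INTO stock_data
                       (date, ticker, open_price, high_price, low_price,
                        closing_price, volume)
                       VALUES (?, ?, ?, ?, ?, ?, ?)""",
                    (
                        date.strftime("%Y-%m-%d"),
                        ticker,
                        float(row["Open"]),
                        float(row["High"]),
                        float(row["Low"]),
                        float(row["Close"]),
                        int(row["Volume"]),
                    ),
                )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    fetch_and_store_stock_data()
```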

Acceptance Criteria

  • The feature fetches daily stock data for the specified tickers.
  • The stock data is accurately stored in the SQLite database according to the schema.
  • The feature handles errors gracefully and logs any issues encountered during the data fetching process.
  • Documentation is provided for setting up and using the feature.

Please provide an estimate for the time required to implement this feature and any potential challenges or considerations we should be aware of.

Implement Feature to Fetch and Store News Data in SQLite Database

Summary

Here is the RapidAPI we will use to get news data: https://rapidapi.com/rphrp1985/api/newsnow/

We need to develop a feature that automatically fetches news articles on ANY topic specified by the user and stores them in our SQLite database. This feature should capture comprehensive details about the articles, including titles, images, video links, publication dates, short descriptions, full text, and source URLs.

Requirements:

  • Data to Fetch and Store: For each fetched news article, the following information should be captured:
    • Title: The title of the article.

    • Top Image: URL to the main image of the article.

    • Videos: Any related video content URLs.

    • URL: The direct link to the news article.

    • Date: The publication date of the article.

    • Short Description: A brief summary of the article.

    • Text: The full text of the article.

    • Source: The name of the publication or source website.

  • Database Schema: The SQLite database should have a table named news specifically designed to store the fetched news data. The schema should include columns for each of the data points listed above.
  • Update Frequency: The feature should be capable of fetching new articles hourly.
  • Error Handling: Implement robust error handling to manage issues such as API rate limits, data parsing errors, and database insertion failures.
  • Documentation: Provide detailed documentation on how to configure any necessary API keys or credentials and instructions on how the news fetching feature operates.
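
A minimal sketch of the fetch-and-store flow. The X-RapidAPI-Key/X-RapidAPI-Host header convention is standard for RapidAPI, but the endpoint path, request payload, and response keys below are placeholders and must be taken from the newsnow API documentation:

```python
import sqlite3

import requests

DB_PATH = "newswavemetrics.db"            # assumed database file name
RAPIDAPI_KEY = "YOUR_RAPIDAPI_KEY"        # from your RapidAPI account
API_HOST = "newsnow.p.rapidapi.com"       # standard RapidAPI host convention
NEWS_URL = f"https://{API_HOST}/newsv2"   # placeholder endpoint; check the API docs


def fetch_and_store_news(topic: str, db_path: str = DB_PATH) -> None:
    """Fetch articles for a user-specified topic and insert them into news."""
    headers = {
        "X-RapidAPI-Key": RAPIDAPI_KEY,
        "X-RapidAPI-Host": API_HOST,
        "Content-Type": "application/json",
    }
    # Placeholder parameter and response-field names; the real ones come
    # from the newsnow API documentation on RapidAPI.
    payload = {"query": topic, "language": "en", "page": 1}
    response = requests.post(NEWS_URL, json=payload, headers=headers, timeout=30)
    response.raise_for_status()
    articles = response.json().get("news", [])

    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS news (
                   title TEXT, top_image TEXT, videos TEXT, url TEXT PRIMARY KEY,
                   date TEXT, short_description TEXT, text TEXT, source TEXT
               )"""
        )
        for article in articles:
            conn.execute(
                """INSERT OR IGNORE INTO news
                   (title, top_image, videos, url, date,
                    short_description, text, source)
                   VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
                (
                    article.get("title"),
                    article.get("top_image"),
                    ",".join(article.get("videos") or []),
                    article.get("url"),
                    article.get("date"),
                    article.get("short_description"),
                    article.get("text"),
                    article.get("source"),
                ),
            )
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    fetch_and_store_news("stock market")
```

Scheduling this call hourly (per the update-frequency requirement) and wrapping the HTTP request with retry/backoff would also cover the API rate-limit concern.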

Acceptance Criteria

  • The system successfully fetches and stores news articles based on user input in the news table in the SQLite database.
  • The feature updates the database with new articles as they are published.
  • Radio Button for News Analysis: Users can search for news, and the results are displayed as a table.
  • Error handling is in place to ensure the system's stability and reliability.
  • Comprehensive documentation is provided for configuring and using the feature.
