espin086 / gpt-jobhunter

AI-powered job analysis and resume coaching tool using GPT. Analyzes job postings and provides personalized recommendations to job seekers for improving their resumes.

License: MIT License

Languages: Python 98.19%, Makefile 1.03%, Dockerfile 0.63%, Shell 0.15%
Topics: hacktoberfest

gpt-jobhunter's Introduction

Hi 👋, I'm JJ

Data Scientist and Machine Learning Engineer

💡 My Portfolio of Projects

⬇️ ⬇️ ⬇️

💼 GPT-JobHunter: Text Analysis, APIs, SQL, User Input, Machine Learning, Generative AI


Analyzes job postings and provides personalized recommendations to job seekers for improving their resumes.


💰 NewsWaveMetrics: APIs, SQL, Python, Text Analysis, Time Series Analysis, etc.


NewsWaveMetrics is a powerful tool for analyzing news sentiment, allowing users to correlate these stories with stock market price data.


🧠 AutoLearn: Automation, Machine Learning, Data Visualization, Model Training/Tuning/Inference


AutoLearn is a powerful tool for data scientists that automates the process of exploratory data analysis (EDA) and machine learning model training.


💥 EmoTrack: AWS, Computer Vision, Real-Time Processing, SQL


A real-time emotion detection and tracking application using webcam input. Analyze and visualize your emotional trends over time with interactive charts.


Languages and Tools:


AWS, Azure, Docker, GCP, Git, Linux, OpenCV, pandas, Python, PyTorch, scikit-learn, Seaborn, SQLite, TensorFlow


gpt-jobhunter's People

Contributors

0xchrisw, atharvajadhav7, espin086, zaibys


gpt-jobhunter's Issues

Improve User Experience by Combining Pipeline Execution and SQLite Query Results in a Single Button Click

Currently, our application requires users to perform two separate actions to execute the pipeline and view the results. This two-step process is time-consuming and negatively impacts the user experience. To enhance usability, we propose modifying the front-end Streamlit code in main.py to consolidate these actions into a single button click.

Objective:
The objective of this issue is to streamline the user experience by implementing a single button that triggers both the pipeline execution and the display of SQLite query results.

Proposed Solution:

  1. Update the front-end Streamlit code in main.py to include a single button labeled "Run Pipeline and Show Results" (a sketch is included below).
  2. When the user clicks on this button, the application should initiate the pipeline execution process.
  3. Once the pipeline execution is complete, the application should automatically retrieve the results of the SQLite query.
  4. The retrieved results should be displayed to the user in a clear and intuitive manner.

Expected Outcome:
By implementing this change, users will no longer need to perform two separate actions, resulting in a more efficient and seamless user experience. The consolidated button will simplify the process and reduce the time required to execute the pipeline and view the query results.

Additional Information:

  • The application is built using Streamlit and uses SQLite for querying data.
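
A rough sketch of what the combined control in main.py could look like (the run_pipeline helper, database file name, and jobs table name are assumptions, not the project's actual names):

import sqlite3

import pandas as pd
import streamlit as st

DB_PATH = "jobhunter.db"  # assumed database file name


def run_pipeline() -> None:
    """Placeholder for the project's existing pipeline entry point."""
    ...


if st.button("Run Pipeline and Show Results"):
    with st.spinner("Running pipeline..."):
        run_pipeline()
    with sqlite3.connect(DB_PATH) as conn:
        results = pd.read_sql_query("SELECT * FROM jobs", conn)  # assumed table name
    st.dataframe(results)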

Please view the CONTRIBUTING.md for instructions on how to contribute and take on this issue.

Thank you!

Add Code Coverage Metrics to Makefile using pytest-cov module

We need to implement code coverage metrics in our project to ensure the quality and effectiveness of our tests. To achieve this, we should integrate the pytest-cov module into our Makefile. This will allow us to generate code coverage reports both in the terminal and as an HTML file that can be saved to GitHub.

Steps to implement:

  1. Install pytest-cov module:

    • Add pytest-cov to the project's requirements.txt file or install it using pip:
      pip install pytest-cov
      
  2. Make sure to update requirements.txt with the version of pytest-cov that is known to work.

  3. Update the Makefile:

    • Open the Makefile in the project's root directory.
    • Add a new target, e.g., coverage, to run the tests with code coverage:
      coverage:
          pytest --cov=<project_directory> --cov-report=term --cov-report=html
      
      Replace <project_directory> with the actual directory containing the project's code.
  4. Run the code coverage:

    • In the terminal, navigate to the project's root directory.
    • Execute the following command to run the tests with code coverage:
      make coverage
      
      
      

NOTE: before submitting a pull request, make sure you run make check and that all tests are passing. Thank you!

menu.py - add an option called "0. Set Up" which asks the user for OpenAI API keys and then stores them as environment variables

openai_models.py won't work without the API key and organization being set up by the user in the menu; if you don't do this, you will get an error like the one below:

Traceback (most recent call last):
  File "/Users/jjespinoza/Documents/jobhunter/ui/../jobhunter/utils/job_title_generator.py", line 29, in <module>
    job_titles = get_top_job_titles(resume_text)
  File "/Users/jjespinoza/Documents/jobhunter/ui/../jobhunter/utils/job_title_generator.py", line 12, in get_top_job_titles
    message = generate_completion("text-davinci-003", prompt, 0.7, 1000)
  File "/Users/jjespinoza/Documents/jobhunter/jobhunter/utils/openai_models.py", line 12, in generate_completion
    completion = openai.Completion.create(
  File "/Users/jjespinoza/Library/Python/3.9/lib/python/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/jjespinoza/Library/Python/3.9/lib/python/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
    ) = cls.__prepare_create_request(
  File "/Users/jjespinoza/Library/Python/3.9/lib/python/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 106, in __prepare_create_request
    requestor = api_requestor.APIRequestor(
  File "/Users/jjespinoza/Library/Python/3.9/lib/python/site-packages/openai/api_requestor.py", line 130, in __init__
    self.api_key = key or util.default_api_key()
  File "/Users/jjespinoza/Library/Python/3.9/lib/python/site-packages/openai/util.py", line 186, in default_api_key
    raise openai.error.AuthenticationError(
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com for details, or email [email protected] if you have any questions.
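
A minimal sketch of what the "0. Set Up" option could do, assuming the standard OpenAI environment variable names (note that os.environ only affects the current process; persisting the keys across sessions would require a shell profile or a .env file):

import os


def set_up() -> None:
    """Menu option '0. Set Up': prompt for OpenAI credentials and export them."""
    api_key = input("Enter your OpenAI API key: ").strip()
    org = input("Enter your OpenAI organization ID (optional): ").strip()

    # These are the variable names the openai library reads by default.
    os.environ["OPENAI_API_KEY"] = api_key
    if org:
        os.environ["OPENAI_ORGANIZATION"] = org
    print("OpenAI credentials set for this session.")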

[BUG] - extract.py duplicate code

There is duplicate code in extract.py that calls the LinkedIn job search RapidAPI endpoint. This logic was already saved as a separate file, so the duplicate code in extract.py needs to be deleted.

READMEs Outdated

  • Test deploying in another EC2 instance
  • Include Menu options for using the tool
  • Add a README in the Utils subfolder where much of the code lives?

Dockerize Application

Need to dockerize the application, using information from the Makefile, to make deployment across clouds and locally easier.

[BUG] Move config.py file to the root folder and update references

Currently, the config.py file is located in the src folder of our repository. In order to improve the organization and accessibility of our codebase, we propose moving the config.py file to the root folder of the repository.

This enhancement involves updating all the programs that currently reference the config.py file to read from the new location.

Tasks:

  1. Move the config.py file from the src folder to the root folder of the repository.
  2. Identify all the programs that use the config.py file.
  3. Modify each program to read from the new location of the config.py file.
  4. Test the modified programs to ensure they function correctly with the updated file location.

Expected Outcome:
By moving the config.py file to the root folder and updating the references in the programs, we will improve the organization of our codebase and make it easier to locate and manage the configuration file.

[ENHANCEMENT] code only works with Linkedin - check out RapidAPI for a Jobs API across platforms

This is the new API I want to use:

https://rapidapi.com/letscrape-6bRBa3QguO5/api/jsearch/

Here are the steps to update the application:

  1. Update config.py to reference a new table for the updated job search data.
  2. Update SQLiteHandler.py to create a new table based on the config metrics.
  3. Create a new extract.py to get data from the new API and save the raw results into the raw data folder (see the sketch below).
  4. Create a new transform.py to transform the data and save it into the processed folder.
  5. Create a new load.py to load the processed data into the new database.
  6. Create a new report.py to query the data to present to the front end.
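
A minimal sketch of the new extract.py against the JSearch endpoint (the query parameters, response handling, environment variable name, and folder layout are assumptions based on typical RapidAPI usage, not confirmed details):

import json
import os
from datetime import datetime

import requests

JSEARCH_URL = "https://jsearch.p.rapidapi.com/search"  # assumed endpoint path
RAW_DATA_DIR = "data/raw"  # assumed raw data folder


def extract(search_term: str, page: int = 1) -> dict:
    """Call the JSearch API and save the raw response into the raw data folder."""
    headers = {
        "X-RapidAPI-Key": os.environ["RAPID_API_KEY"],  # assumed env var name
        "X-RapidAPI-Host": "jsearch.p.rapidapi.com",
    }
    params = {"query": search_term, "page": page}
    response = requests.get(JSEARCH_URL, headers=headers, params=params, timeout=30)
    response.raise_for_status()
    data = response.json()

    os.makedirs(RAW_DATA_DIR, exist_ok=True)
    timestamp = datetime.now().strftime("%Y%m%d%H%M%S")
    with open(f"{RAW_DATA_DIR}/jsearch_{timestamp}.json", "w") as f:
        json.dump(data, f)
    return data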

SQLiteHandler.py

There is code spread around the module that manages the creation, insertion, and querying of SQLite data.

Put all of this code into one cohesive module to make it easier to maintain and debug.
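
A rough sketch of what the consolidated handler could look like (the database file name and jobs schema are placeholders, not the project's real ones):

import sqlite3
from typing import Any, List, Tuple


class SQLiteHandler:
    """Single home for creating, inserting into, and querying the SQLite database."""

    def __init__(self, db_path: str = "jobhunter.db") -> None:  # assumed file name
        self.db_path = db_path

    def execute(self, sql: str, params: Tuple = ()) -> List[Tuple[Any, ...]]:
        with sqlite3.connect(self.db_path) as conn:
            cursor = conn.execute(sql, params)
            conn.commit()
            return cursor.fetchall()

    def create_table(self) -> None:
        # Placeholder schema; the real columns would come from config.py.
        self.execute(
            """CREATE TABLE IF NOT EXISTS jobs (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   title TEXT,
                   company TEXT,
                   date TEXT,
                   description TEXT
               )"""
        )

    def insert_job(self, title: str, company: str, date: str, description: str) -> None:
        self.execute(
            "INSERT INTO jobs (title, company, date, description) VALUES (?, ?, ?, ?)",
            (title, company, date, description),
        )

    def query_jobs(self) -> List[Tuple[Any, ...]]:
        return self.execute("SELECT * FROM jobs ORDER BY date DESC")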

Remove dependency on AWS Secrets Manager, find another way to store credentials outside of AWS

Here are the places where the AWS Secrets Manager code is being used:

jjespinoza@JJs-MacBook-Pro jobhunter % grep -r 'aws_secrets_manager'
./jobhunter.egg-info/SOURCES.txt:jobhunter/utils/aws_secrets_manager.py
./jobhunter/utils/search_jsearch_jobs.py:import jobhunter.utils.aws_secrets_manager
./jobhunter/utils/search_jsearch_jobs.py: "X-RapidAPI-Key": aws_secrets_manager.get_secret(
./jobhunter/utils/search_linkedin_jobs.py:import jobhunter.utils.aws_secrets_manager
./jobhunter/utils/search_linkedin_jobs.py: "X-RapidAPI-Key": jobhunter.utils.aws_secrets_manager.get_secret(
./jobhunter/utils/emailer.py:import jobhunter.utils.aws_secrets_manager
./jobhunter/utils/emailer.py: email_sender = aws_secrets_manager.get_secret(
./jobhunter/utils/emailer.py: email_password = aws_secrets_manager.get_secret(
./build/lib/jobhunter/utils/search_jsearch_jobs.py:import jobhunter.utils.aws_secrets_manager
./build/lib/jobhunter/utils/search_jsearch_jobs.py: "X-RapidAPI-Key": aws_secrets_manager.get_secret(
./build/lib/jobhunter/utils/search_linkedin_jobs.py:import jobhunter.utils.aws_secrets_manager
./build/lib/jobhunter/utils/search_linkedin_jobs.py: "X-RapidAPI-Key": jobhunter.utils.aws_secrets_manager.get_secret(
./build/lib/jobhunter/utils/emailer.py:import jobhunter.utils.aws_secrets_manager
./build/lib/jobhunter/utils/emailer.py: email_sender = aws_secrets_manager.get_secret(
./build/lib/jobhunter/utils/emailer.py: email_password = aws_secrets_manager.get_secret(
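
One possible replacement is plain environment variables loaded from a local .env file, sketched here with python-dotenv (the variable names are assumptions, and the package would need to be added to requirements.txt):

import os

from dotenv import load_dotenv  # python-dotenv, added to requirements.txt

# Load secrets from a local .env file that is listed in .gitignore,
# replacing the aws_secrets_manager.get_secret(...) calls shown above.
load_dotenv()

rapid_api_key = os.environ["RAPID_API_KEY"]    # assumed variable names
email_sender = os.environ["EMAIL_SENDER"]
email_password = os.environ["EMAIL_PASSWORD"]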

Add type hinting and static code analysis with mypy to Makefile and GitHub Actions

This GitHub issue aims to enhance the project's code quality by implementing type hinting and static code analysis. By adding type hints to the codebase and integrating mypy into the Makefile and GitHub Actions, we can ensure type consistency throughout the project. This will help catch potential type-related bugs early on and improve the overall maintainability and reliability of the codebase.

If you are new to Python Typing, you can watch this video to learn: https://www.youtube.com/watch?v=QORvB-_mbZ0
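
For anyone new to typing, a small illustrative example of the kind of annotation mypy would check (the function is illustrative, not taken from the codebase):

from typing import List, Optional


def extract_salaries(descriptions: List[str]) -> List[Optional[float]]:
    """Return one parsed salary (or None) per job description.

    With annotations like these, running `mypy jobhunter/` from the Makefile or a
    GitHub Actions step will flag mistakes such as passing a single string instead
    of a list, or forgetting to handle the None case.
    """
    salaries: List[Optional[float]] = []
    for text in descriptions:
        salaries.append(0.0 if "$" in text else None)  # placeholder parsing logic
    return salaries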

Enhancement Request - Display GPT-based Job Descriptions as Pipeline Recommendations

As a user of the job search feature on GitHub, I would like to suggest an enhancement to the user interface (UI) that would display GPT-based job descriptions as recommendations for running the pipeline of job searches.

Currently, the job search feature on GitHub provides users with a list of job postings based on their search criteria. While this is helpful, it would be even more beneficial if the UI could leverage GPT (Generative Pre-trained Transformer) technology to generate job descriptions that align with the user's search preferences.

By incorporating GPT-based job descriptions into the UI, users would have access to more comprehensive and tailored recommendations for their job search pipeline. This would enable them to make more informed decisions about which jobs to pursue and increase their chances of finding the right opportunities.

The proposed enhancement would involve integrating GPT models into the existing job search algorithm. These models would analyze the user's resume to generate job descriptions that closely match their requirements. The UI would then display these GPT-based job descriptions alongside the regular job postings, providing users with a broader range of options to consider.

Ideally, it would be two categories of job descriptions:

  1. Lateral moves
  2. Moving up in title

MUST RETURN JSON OBJECT
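
A hedged sketch of the prompt-and-parse step, using the legacy openai.Completion interface that appears elsewhere in this repo (the model name, prompt wording, and JSON keys are assumptions):

import json

import openai  # legacy 0.x interface, matching the traceback shown in the setup issue above


def generate_job_recommendations(resume_text: str) -> dict:
    """Ask GPT for lateral-move and move-up job descriptions as a JSON object."""
    prompt = (
        "Based on the resume below, return ONLY a JSON object with two keys: "
        '"lateral_moves" and "moving_up", each a list of short job descriptions.\n\n'
        f"Resume:\n{resume_text}"
    )
    completion = openai.Completion.create(
        model="text-davinci-003",  # model already used elsewhere in the project
        prompt=prompt,
        temperature=0.7,
        max_tokens=1000,
    )
    # json.loads raises a ValueError if the model did not return valid JSON,
    # which enforces the MUST RETURN JSON OBJECT requirement.
    return json.loads(completion["choices"][0]["text"])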

FileHandler.py to reduce duplicate code

There is duplicate code all over the ETL process that reads and saves JSON.

Create a class called FileHandler that combines all of this code and reduces duplication.
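
A minimal sketch of such a class (the folder paths are placeholders):

import json
import os
from typing import Any, List


class FileHandler:
    """Centralizes the JSON reading and saving done throughout the ETL steps."""

    def __init__(self, raw_path: str = "data/raw", processed_path: str = "data/processed"):
        self.raw_path = raw_path            # assumed folder names
        self.processed_path = processed_path
        os.makedirs(self.raw_path, exist_ok=True)
        os.makedirs(self.processed_path, exist_ok=True)

    def load_json(self, file_path: str) -> Any:
        with open(file_path, "r", encoding="utf-8") as f:
            return json.load(f)

    def load_json_files(self, directory: str) -> List[Any]:
        files = sorted(f for f in os.listdir(directory) if f.endswith(".json"))
        return [self.load_json(os.path.join(directory, f)) for f in files]

    def save_json(self, data: Any, file_path: str) -> None:
        with open(file_path, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)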

Enable CI/CD deployment to PyPI on merge or commit to main branch

As a developer, I would like to request an enhancement to our project's CI/CD pipeline to automatically deploy our project to PyPI whenever a merge or commit happens to the main branch.

Currently, our project is hosted on GitHub and we have a CI/CD pipeline set up using a continuous integration tool. However, there is no deployment process to PyPI.

By enabling automatic deployment to PyPI on merge or commit to the main branch, we can streamline our release process and ensure that the latest version of our project is readily available to our users.

The desired workflow would be as follows:
0. Make the entire project pip-installable locally using setup.py (a minimal sketch is included below).

  1. Whenever a merge or commit occurs on the main branch, the CI/CD pipeline should trigger.
  2. The pipeline should build the project, run tests, and generate the necessary artifacts.
  3. Once the build and tests pass successfully, the pipeline should automatically deploy the project to PyPI.
  4. The deployed package on PyPI should be versioned according to the project's versioning scheme.

This enhancement will save time and effort for our development team, as well as ensure that our users have access to the latest stable version of our project without any manual intervention.

Please consider implementing this enhancement to our CI/CD pipeline. If there are any further details or requirements needed, please let me know.
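
For step 0, a minimal setup.py sketch might look like this (the package name, entry point, and metadata are placeholders):

from setuptools import find_packages, setup

setup(
    name="jobhunter",  # assumed package name
    version="0.1.0",   # placeholder; follow the project's versioning scheme
    packages=find_packages(exclude=["tests"]),
    install_requires=open("requirements.txt").read().splitlines(),
    python_requires=">=3.9",
    entry_points={
        "console_scripts": [
            "jobhunter=jobhunter.main:main",  # hypothetical CLI entry point
        ]
    },
)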

Resume isn't getting processed in transform.py because the working directory is different during the whole process

I was trying out the new version, but still no output:
[screenshot]

I checked the errors; there was a "resume not found" error, so I went through all the files to debug where it might be coming from. In transform.py, where the read_resume_text() function is called, I tried changing paths, but that didn't help, so I checked what the working directory is during processing and found it to be something like this:
[screenshot]

I think this is the build location, and in this directory there is no resume.txt:
[screenshot]

We need to implement this so that it takes the resume from the path specified and doesn't treat it as relative to the working directory.

Also, I'm not sure what the warnings below are due to:
[screenshot]

Maybe data is not getting loaded in my pipeline; I'm not sure why.
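
For the working-directory problem above, one option is to resolve the resume path relative to the module (or accept an absolute path) instead of the current working directory; a rough sketch, assuming the file layout:

from pathlib import Path

# Resolve the resume relative to this module's location rather than the current
# working directory, so the pipeline finds it no matter where it is launched from.
RESUME_PATH = Path(__file__).resolve().parent / "resumes" / "resume.txt"  # assumed layout


def read_resume_text(resume_path: Path = RESUME_PATH) -> str:
    if not resume_path.exists():
        raise FileNotFoundError(f"Resume not found at {resume_path}")
    return resume_path.read_text(encoding="utf-8")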

[ENHANCEMENT] Enhance Resume Handling and Analysis

Currently, our application handles user resumes by storing them locally in a data file. This approach has limitations and restricts us from performing in-depth analysis and comparisons. We aim to enhance the resume handling process by:

  • Taking users' resumes directly from the Streamlit application.
  • Saving the resumes in an SQLite database.
  • Designing a schema for the new resume table.
  • Updating the front-end code to seamlessly accept user resumes and store them in the database.
  • Modifying the code responsible for resume similarity analysis, which currently reads resumes from local files.
  • Implementing a feature that allows users to select which resume they want to analyze against job listings.

Details:

Current Resume Handling: Resumes are currently stored locally in a data file, which limits our ability to perform dynamic analysis and comparisons.

Proposed Enhancements:

  • Database Table for Resumes: Create a new table in the SQLite database dedicated to storing user resumes. Design the schema for this table.
  • Front-End Integration: Update the front-end code to seamlessly accept user resumes through the Streamlit application and store them in the newly created database table.
  • Resume Similarity Analysis: Modify the code responsible for resume similarity analysis to read resumes from the database rather than local files.
  • User Resume Selection: Implement a user-friendly feature that allows users to select which resume they want to analyze against job listings, enhancing the user experience.

Expected Benefits:

  • Users can upload and manage their resumes directly within the application, making it more user-friendly and efficient.
  • The enhanced system will enable more dynamic and sophisticated resume analysis and comparison against job listings.

Proposed Steps:

  • Design the schema for the new resume table in the SQLite database (a possible starting schema is sketched below).
  • Update the front-end code to include resume upload functionality.
  • Modify the resume similarity analysis code to work with the new database storage.
  • Implement the resume selection feature to allow users to choose the resume for analysis.
  • Test the entire workflow to ensure proper integration and functionality.
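
A possible starting point for the schema-design step (the column names and database file name are assumptions):

import sqlite3

CREATE_RESUMES_TABLE = """
CREATE TABLE IF NOT EXISTS resumes (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    resume_name TEXT NOT NULL,
    resume_text TEXT NOT NULL,
    uploaded_at TEXT DEFAULT CURRENT_TIMESTAMP
)
"""

with sqlite3.connect("jobhunter.db") as conn:  # assumed database file name
    conn.execute(CREATE_RESUMES_TABLE)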

[ENHANCEMENT] Automatic Deployment to Docker Registry on Merge to Main

Description:
As a developer, I want to set up an automated deployment process to my own Docker registry whenever there is a merge to the main branch in my GitHub repository. This will ensure that the latest changes are immediately available for deployment and testing.

To achieve this, I plan to implement the following steps:

  1. Configure a webhook in my GitHub repository to trigger a deployment on merge to the main branch.
  2. Set up a CI/CD pipeline using a tool like Jenkins or GitLab CI/CD to handle the deployment process.
  3. Create a Dockerfile in my repository that defines the necessary steps to build and package the application.
  4. Configure the CI/CD pipeline to build the Docker image using the Dockerfile and push it to my own Docker registry.
  5. Update the deployment script to pull the latest Docker image from the registry and deploy it to the desired environment.

By automating the deployment process, I can ensure that any changes merged into the main branch are quickly and reliably deployed to the production environment, reducing manual effort and minimizing the risk of human error.

Please let me know if you need any further information or assistance with setting up this automated deployment process.

Harden Code: Unit Tests and Functional Tests

The code needs to be hardened further. Here are some critical tests that need to be added to the test suite:

  1. What happens if the database is deleted and the code is rerun? Will it error out gracefully?

  2. What happens if there is no resume saved locally? Will the testing suite produce an error? Are there assert statements when reading in resume data to ensure all resume.txt data is there?

  3. If the job search is run and there have not been any new jobs for a given title in 3 months, does the front-end streamlit application warn the user that recent jobs weren't uploaded?

The answer to the questions above is NO, so these tests, as well as other key tests, need to be added to the code base. A sketch of the first case is shown below.
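
A sketch of what the first test cases could look like (the imports are hypothetical and should be adjusted to the project's real modules):

import pytest


def test_missing_database_is_recreated_gracefully(tmp_path):
    """If the database file was deleted, rerunning should recreate it, not crash."""
    from jobhunter.SQLiteHandler import SQLiteHandler  # hypothetical import path

    handler = SQLiteHandler(db_path=str(tmp_path / "missing.db"))
    handler.create_table()
    assert handler.query_jobs() == []


def test_missing_resume_raises_clear_error(tmp_path):
    """Reading a resume that does not exist should fail with a clear exception."""
    from jobhunter.transform import read_resume_text  # hypothetical import path

    with pytest.raises(FileNotFoundError):
        read_resume_text(tmp_path / "resume.txt")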

[ENHANCEMENT] Improve Pylint GitHub Actions Configuration

Currently, our Pylint GitHub Actions workflow enforces a strict 100% pass score, which can be overly restrictive and may prevent code contributions that are otherwise acceptable. This issue is raised to improve the Pylint GitHub Actions configuration by allowing a 70% pass score, providing a more reasonable threshold for code quality checks.

Details:

Current GitHub Actions Configuration: [Link to Current Workflow Configuration]

Proposed Enhancement:

Adjust Pylint Pass Score: Modify the GitHub Actions workflow configuration to allow a 70% pass score for Pylint checks instead of the current 100%. This change will provide flexibility for code contributions while still maintaining code quality standards.

Expected Benefits:

Code contributions that meet a 70% Pylint pass score will be accepted, promoting a more inclusive development environment.

Developers will have room to improve code quality while avoiding strict rejections based solely on Pylint scores.

Proposed Steps:

  • Review the existing GitHub Actions configuration for Pylint.
  • Modify the configuration to set the Pylint pass score threshold to 70%.
  • Test the updated configuration to ensure that it accurately reflects the desired behavior.
  • Update documentation or guidelines to inform contributors about the new Pylint pass score requirement.

Additional Information:

  • Setting a 70% pass score allows us to strike a balance between code quality and collaboration, fostering a more inclusive development process.
  • Contributors will still be encouraged to improve code quality, but this change acknowledges that perfection may not always be attainable or necessary for all code contributions.

Note: Please ensure that this issue is discussed and aligned with the project's coding standards and quality goals. Additionally, consider using code reviews and other quality assurance measures in conjunction with Pylint to maintain code quality.

Create Automated Sphinx Documentation for Python Project

Hello,

I would like to request the creation of Sphinx documentation for our Python project. Sphinx is a powerful tool that can generate high-quality documentation from reStructuredText files.

To contribute to the documentation, please refer to the CONTRIBUTING.md file in our project repository. It contains detailed instructions on how to contribute and make changes to the documentation.

Having comprehensive documentation is crucial for our project's success. It helps users understand how to use our code, contributes to better collaboration, and facilitates the onboarding process for new contributors.

If you are interested in working on this task, please let us know. We would be happy to provide any additional information or guidance you may need.

Thank you for your attention to this matter.

Refactoring Request for transform.py

I would like to request a refactoring of the code in transform.py. I believe that converting the code into a class would greatly simplify its architecture and improve its maintainability. Currently, the code consists of multiple functions that operate on a single data type, a list of JSON files. By creating a class that operates on a list of JSON objects, each of these functions can be transformed into methods, resulting in a more organized and cohesive codebase.

The proposed class structure would allow for better encapsulation and reusability of code. It would also make it easier to manage the state of the data being processed, as the class can maintain the data as an instance variable. Additionally, the class can provide a clear interface for interacting with the data and performing various transformations on it.

I suggest naming the class "DataTransformer" and placing it in a separate file called "data_transformer.py". The class can have the following methods:

  1. __init__(self, data: List[dict]): Initializes the DataTransformer object with the input data.

  2. delete_json_keys(self, *keys): Deletes the specified keys from each JSON object in the data.

  3. drop_variables(self): Drops the variables that are not needed for the analysis from each JSON object in the data.

  4. remove_duplicates(self): Removes duplicate dictionaries from the data.

  5. rename_keys(self, key_map: dict): Renames keys in each JSON object based on a key map.

  6. convert_keys_to_lowercase(self, *keys): Converts the values of the specified keys to lowercase in each JSON object.

  7. add_description_to_json_list(self): Gathers job descriptions from the web and adds them to each JSON object in the data.

  8. extract_salaries(self): Extracts salaries from the job descriptions in each JSON object.

  9. compute_resume_similarity(self, resume_text: str): Computes the similarity between the resume text and the job descriptions in each JSON object.

  10. transform(self): Executes all the transformation methods in the desired order and saves the processed data.

By refactoring the code into a class, it will be easier to manage and extend the functionality in the future. I believe this change will greatly improve the overall structure and readability of the code.
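
A skeleton of the proposed class with a few of the methods filled in (the remaining methods from the list above would follow the same pattern):

import json
from typing import List


class DataTransformer:
    """Skeleton of the proposed class for data_transformer.py (a subset of methods shown)."""

    def __init__(self, data: List[dict]):
        self.data = data

    def delete_json_keys(self, *keys: str) -> None:
        for item in self.data:
            for key in keys:
                item.pop(key, None)

    def remove_duplicates(self) -> None:
        seen, unique = set(), []
        for item in self.data:
            fingerprint = json.dumps(item, sort_keys=True)
            if fingerprint not in seen:
                seen.add(fingerprint)
                unique.append(item)
        self.data = unique

    def rename_keys(self, key_map: dict) -> None:
        self.data = [{key_map.get(k, k): v for k, v in item.items()} for item in self.data]

    def convert_keys_to_lowercase(self, *keys: str) -> None:
        for item in self.data:
            for key in keys:
                if isinstance(item.get(key), str):
                    item[key] = item[key].lower()

    def transform(self) -> List[dict]:
        # The remaining methods (drop_variables, add_description_to_json_list,
        # extract_salaries, compute_resume_similarity, ...) would be called here in order.
        self.remove_duplicates()
        return self.data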

Extra Folders in Test Folder

Not sure why we have extra folders in the test folder. They may be created programmatically, but they need to be removed; they have no data and we should not be saving data there. Or are they part of tests being run?

[Screenshot 2023-10-23 at 10:27:44 AM]

[ENHANCEMENT] - Enhancement Request - Improve Project Documentation

This issue is raised to enhance the documentation for our project. Improving the documentation is crucial for better project understanding, onboarding, and user experience. This enhancement includes adding screenshots, describing features, and making the documentation more comprehensive and user-friendly.

  1. Screenshots: Include screenshots or visual representations wherever applicable to provide a visual context for the documentation. Screenshots can help users understand the project's interface, settings, and usage.
  2. Feature Descriptions: Clearly describe each feature of the project, including its purpose, functionality, and how to use it. This will help users make the most of the project's capabilities.
  3. Usage Examples: Provide step-by-step usage examples or use cases that demonstrate how to perform common tasks within the project. This helps users apply the project effectively.
  4. FAQ Section: Add a Frequently Asked Questions (FAQ) section to address common queries and troubleshooting tips. This can reduce user support requests.
  5. Installation Instructions: If applicable, improve the installation instructions by providing detailed setup steps for different environments (e.g., development, production).
  6. API Documentation: If the project includes an API, document the API endpoints, request/response formats, and authentication methods.
  7. Table of Contents: Organize the documentation with a clear and navigable table of contents, making it easy for users to find relevant information quickly.
  8. Search Functionality: If the documentation is web-based, consider adding a search feature to allow users to quickly locate specific topics.
  9. Feedback Mechanism: Encourage users to provide feedback or report documentation issues to further improve the documentation.

Expected Benefits:

  • Enhanced documentation will improve user onboarding and reduce the learning curve for new users.
  • Users will have a better understanding of the project's features and capabilities.
  • The documentation will serve as a valuable resource for both developers and end-users.

Proposed Steps:

  1. Review the existing documentation to identify areas that need improvement.
  2. Collect and prepare screenshots and visuals to be integrated into the documentation.
  3. Collaborate with project team members to gather detailed feature descriptions and usage examples.
  4. Create or update documentation pages as per the proposed enhancements.
  5. Test the documentation with new users to gather feedback and make further improvements.
  6. Ensure that the documentation is kept up-to-date with the project's development.

Additional Information:

  • Improving project documentation is an ongoing process that benefits both the project team and the user community.
  • Collaboration among project members, including developers, technical writers, and designers, is essential to achieve comprehensive documentation.
  • Feedback from users is highly valuable and should be encouraged to maintain documentation quality.

Note: Please link this issue to relevant project milestones or epics for tracking and prioritization.

Let's work together to enhance our project's documentation and provide users with a better experience.

DB not Updating After Creation of SQLHandler.py

I have rerun the application after the creation of SQLHandler.py and now new jobs are not showing up; the last job posted is from two days ago. There is a bug in the saving of data into the SQL database.
[Screenshot 2023-10-20 at 12:25:35 PM]

OpenAI module not working because need to use AWS Secrets Manager

I need to update the OpenAI module to use the Secrets Manager keys; right now it depends on local environment variables, which are not scalable.

After I update the code, I will also need to save a copy of it to the mypyutils repo and test that it works.

Add user input in setup.py for the API key so that a pip install can get the code running

We need this so that setup.py can also set up the API key during installation, as another way to set up the project:

To make the API key available from setup.py, you can use environment variables. Note that setting os.environ only affects the current process; to persist the key across sessions you would need to write it to a shell profile or a .env file. Here's the basic idea:

  1. Open the setup.py file in a text editor.
  2. Add the following code at the beginning of the file:
import os

# Check if the API key is already set as an environment variable
if 'API_KEY' not in os.environ:
    api_key = input('Enter your API key: ')
    os.environ['API_KEY'] = api_key
