yu8970 / arxivqa

This project is a fork of taesiri/arxivqa.

0 stars · 0 forks · 52.98 MB

WIP - Automated Question Answering for ArXiv Papers with Large Language Models

Home Page: https://huggingface.co/datasets/taesiri/arxiv_qa

Languages: Python 100.00%

arxivqa's Introduction

Automated Question Answering with ArXiv Papers

Latest 25 Papers

  • JudgeLM: Fine-tuned Large Language Models are Scalable Judges - [Arxiv] [QA]
  • Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time - [Arxiv] [QA]
  • HyperFields: Towards Zero-Shot Generation of NeRFs from Text - [Arxiv] [QA]
  • Controlled Decoding from Language Models - [Arxiv] [QA]
  • LLM-FP4: 4-Bit Floating-Point Quantized Transformers - [Arxiv] [QA]
  • LightSpeed: Light and Fast Neural Light Fields on Mobile Devices - [Arxiv] [QA]
  • TD-MPC2: Scalable, Robust World Models for Continuous Control - [Arxiv] [QA]
  • CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images - [Arxiv] [QA]
  • DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior - [Arxiv] [QA]
  • QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models - [Arxiv] [QA]
  • Detecting Pretraining Data from Large Language Models - [Arxiv] [QA]
  • ConvNets Match Vision Transformers at Scale - [Arxiv] [QA]
  • A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation - [Arxiv] [QA]
  • An Early Evaluation of GPT-4V(ision) - [Arxiv] [QA]
  • CLEX: Continuous Length Extrapolation for Large Language Models - [Arxiv] [QA]
  • TiC-CLIP: Continual Training of CLIP Models - [Arxiv] [QA]
  • Woodpecker: Hallucination Correction for Multimodal Large Language Models - [Arxiv] [QA]
  • What's Left? Concept Grounding with Logic-Enhanced Foundation Models - [Arxiv] [QA]
  • Dissecting In-Context Learning of Translations in GPTs - [Arxiv] [QA]
  • In-Context Learning Creates Task Vectors - [Arxiv] [QA]
  • KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval - [Arxiv] [QA]
  • TRAMS: Training-free Memory Selection for Long-range Language Modeling - [Arxiv] [QA]
  • Moral Foundations of Large Language Models - [Arxiv] [QA]
  • SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding - [Arxiv] [QA]
  • RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions - [Arxiv] [QA]

List of Papers by Year

Acknowledgements

This project is made possible by the generous support of Anthropic, which provided free access to the Claude 2.0 API.

arxivqa's People

Contributors

taesiri