
cubarimoe's Introduction

Cubari.moe

An image proxy powered by the Cubari reader.

Testing Supported By
BrowserStack

Prerequisites

  • git
  • python 3.6.5+
  • pip
  • virtualenv

Install

  1. Create a venv for cubarimoe in your home directory.

     virtualenv ~/cubarimoe

  2. Clone cubarimoe's source code into the venv.

     git clone https://github.com/appu1232/cubarimoe ~/cubarimoe/app

  3. Activate the venv.

     cd ~/cubarimoe/app && source ../bin/activate

  4. Install cubarimoe's dependencies.

     pip3 install -r requirements.txt

  5. Change the value of the SECRET_KEY variable to a randomly generated string (an alternative key generator is sketched after this list).

     sed -i "s|\"o kawaii koto\"|\"$(openssl rand -base64 32)\"|" cubarimoe/settings/base.py

  6. Generate the default assets for cubarimoe.

     python3 init.py

  7. Create an admin user for cubarimoe.

     python3 manage.py createsuperuser

Start the server

  • python3 manage.py runserver - keep this console active

The site should now be accessible at http://localhost:8000.
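
By default, runserver binds to 127.0.0.1:8000; to listen on another interface or port, pass an address argument (standard Django behavior):

python3 manage.py runserver 0.0.0.0:8000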

Other info

Relevant URLs (as of now):

  • / - home page
  • /admin - admin view (log in with the admin user created above)
  • /admin_home - admin endpoint for clearing the site's cache

cubarimoe's People

Contributors

aceofvase, algoinde, appu1232, brutuz, dependabot[bot], einlion, funkyhippo, fyren, henrik9999, ilevn, joshdabosh, korbeil, observeroftime, plantysnake, plax-00, pyreko, rapptz, xetera


cubarimoe's Issues

Gdrive links are not working anymore?

Hi @subject-f,

I used facaccimo to generate Google Drive gist links. It was working fine until today, but now I am getting the error below.

The links are shared with everyone and accessible; I checked and everything is fine on that end. But I'm not able to view them on cubarimoe. Any help is appreciated.

Images failed to load. This gallery uses Google Drive links. This might help:

a) Log in to a Google account in this browser for the images to display.

nHentai proxy stopped working

When attempting to access an nHentai proxy page, cubari.moe returns
"Could not complete the request due to the following error: External API error. 500 . Server Error"
AFAIK, this is caused by nHentai adding DDoS protection to their pages, which broke the usual page scraping. This is also mentioned in Dar9586/NClientV2#402.

Access of gists on private repos

Would it be possible to add support for accessing gist/JSON files in private repositories on GitHub?

I thought of something like optionally appending the Personal Access Token (with the repo scope) to the end of the URL and passing it on requests when present.

Something like https://cubari.moe/gist/raw/<usual_gist_path>/<optional_PAT>. It would be a matter of checking whether the last URL segment is 40 characters long and starts with ghp_ to decide whether the Authorization: token <PAT> header should be included in the HTTP request for the raw file.

Having the token in local/remote storage should be more convenient for the user, and likely more secure as well, since it can't then be decoded from the URL.
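
A minimal sketch of the proposed check, assuming the heuristic above; split_token, fetch_raw_gist, and the raw URL are illustrative, not cubarimoe's actual code:

from typing import Optional, Tuple

import requests

def split_token(path: str) -> Tuple[str, Optional[str]]:
    # Treat the last URL segment as a PAT only if it matches the proposed
    # heuristic: exactly 40 characters, prefixed with ghp_.
    segments = path.rstrip('/').split('/')
    last = segments[-1]
    if last.startswith('ghp_') and len(last) == 40:
        return '/'.join(segments[:-1]), last
    return path, None

def fetch_raw_gist(path: str) -> bytes:
    gist_path, token = split_token(path)
    headers = {'Authorization': f'token {token}'} if token else {}
    response = requests.get(
        f'https://raw.githubusercontent.com/{gist_path}',
        headers=headers,
    )
    response.raise_for_status()
    return response.content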

Support for Catbox.moe album feature as a proxy

Hello, I was wondering whether this is possible. I'm no programming genius, but the way it works seems similar to how Imgur works. Imgur has been giving me a lot of issues recently, so I decided to switch to Catbox. Thanks in advance!

Laggy scrolling on links with lots of pages

Specifically on links with more than 100 pages while the reader layout is top-to-bottom.
This only happens on Chrome; Firefox has no issues with the same settings. I also checked with hardware acceleration on and off, with no change.
Laggy scrolling starts at around 100 pages and gets worse with more pages.
Tested with tankoubon releases on nhentai and personal uploads on imgur.

Support for OneDrive folders

I currently have a stand-alone script that parses a OneDrive folder (via a share link like https://1drv.ms/...) and generates a Cubari JSON; however, it has a couple of downsides compared to a native proxy:

  • The JSON often becomes huge, since it has to list each image individually and the OneDrive URLs are not exactly short. With multiple chapters (each sub-folder is assumed to be a chapter), the line count escalates quickly.
  • It's not "live". If I modify something in the folder, I have to rerun the script to manually update the URLs in the JSON.

I'll leave the script below; it should be short and simple enough to be easily understood despite the lack of comments. Any chance OneDrive support could be integrated into Cubari based on it?

Script code
from base64 import urlsafe_b64encode
from datetime import datetime
from json import dumps as json_string
from requests import get as get_url
from sys import argv
import re

FOLDER_CONTENTS_URL = 'https://api.onedrive.com/v1.0/shares/u!{}/driveItem?$expand=children'
FILE_CONTENTS_URL = 'https://api.onedrive.com/v1.0/shares/u!{}/root/content'

def parse_folder(url: str) -> dict:
    # Resolve a OneDrive share link into the folder's metadata and children.
    folder = get_url(FOLDER_CONTENTS_URL.format(b64(url))).json()
    if not folder.get('children', []):
        print(f'Not a OneDrive folder - {url}')
        return None
    # if __name__ == '__main__': print(json_string(folder, indent=2))
    try:
        ctime = int(
            datetime.fromisoformat(
                folder.get('createdDateTime', '').replace('Z', '+00:00')
            ).timestamp()
        )
    except ValueError:
        ctime = int(datetime.utcnow().timestamp())
    title = folder.get('name')
    files = []
    folders = []
    for file in folder.get('children', []):
        if 'folder' in file:
            folders.append(file.get('webUrl'))
        elif 'file' in file and 'image' in file.get('file', {}).get('mimeType', ''):
            # Prefer the stable share-link form; fall back to the direct
            # download URL (which expires) if the item has no webUrl.
            web_url = file.get('webUrl')
            files.append(
                FILE_CONTENTS_URL.format(b64(web_url))
                if web_url
                else file.get('@content.downloadUrl')
            )

    return {'title': title, 'date': ctime, 'files': files, 'folders': folders}


def b64(onedrive_link: str) -> str:
    # The shares API expects the link base64url-encoded with '=' padding stripped.
    return str(urlsafe_b64encode(onedrive_link.encode()), 'utf-8').rstrip('=')

if __name__ == '__main__':
    url = argv[1] if len(argv) > 1 else input('Folder share URL: ')
    print(url)
    api = parse_folder(url)
    if api is None:  # not a folder share link; parse_folder already printed why
        raise SystemExit(1)
    gist = {
        'title': api.get('title', '<required, str>'),
        'description': '<required, str>',
        'artist': '<optional, str>',
        'author': '<optional, str>',
        'cover': '',
        'pages': 0,
        'chapters': {}
    }
    if api.get('folders'):
        print("It's a folder! Recursing...")
        exp = re.compile(
            r'^(?:Ch\.? ?|Chapter )?0?([\d\.,]{1,5})(?: - )?',
            re.RegexFlag.IGNORECASE
        )
        for index, folder in enumerate(api['folders'], start=1):
            recurse = parse_folder(folder)
            search = re.search(exp, recurse['title'])
            if search:
                chapter = search.group(1)
                title = recurse['title'].replace(search.group(), '')
            else:
                # No chapter number in the folder name; fall back to the
                # folder's position in the listing.
                chapter = str(index)
                title = recurse['title']
            gist['chapters'][chapter] = {
                'title': title,
                'last_updated': recurse['date'],
                'groups': {
                    'OneDrive': recurse['files']
                }
            }
            gist['pages'] += len(recurse['files'])
            if not gist['cover']:
                gist['cover'] = recurse.get('files', [])[0]
    else:
        gist['chapters']['1'] = {
            'title': api.get('title', '<optional, str>'),
            'last_updated': api.get('date'),
            'groups': {
                'OneDrive': api.get('files', [])
            }
        }
        gist['pages'] = len(api.get('files', []))
        # 'cover' was initialized to '' above, so dict.get() would never fall
        # back; use `or` to substitute the first page instead.
        gist['cover'] = gist['cover'] or api.get(
            'files', ['<optional, str>']
        )[0]
    print(json_string(gist, indent=4))
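
For reference, the script takes the share link as its only argument (or prompts for one when run without arguments) and prints the generated JSON to stdout; the filename here is a placeholder, since the issue doesn't name the file:

python3 onedrive_to_cubari.py <share-url>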

Some MD series fail with HTTP status 500

An example link: https://cubari.moe/read/mangadex/a1f8f17a-6c51-4a35-9205-bc70bb5fa826/

Based on some basic testing, the failure appears to be related to the CORS proxy being used. For example, using the above link as a guide, trying to access one of the API URLs used in the proxy source returns a bit.ly link to the proxy's GitHub page instead of the requested resource:
https://cors.bridged.cc/https://api.mangadex.org/manga/a1f8f17a-6c51-4a35-9205-bc70bb5fa826?includes[]=cover_art

I am unsure why this fails only for some series and not all, however.

[Feature Request] Add support for localhost

Hi,

If cubari.moe could read JSON files from localhost, that would be useful for users who host it on a home server and want to read their chapters offline from a local PC.

I would appreciate it if you could add this feature for offline usage.

Video-based animation ("gifv") support

We maintain some automatically generated Cubari sources over at https://github.com/catgirl-v/cubari. One of the sources (ADHDinos, scraped from Reddit) contains an animated strip (chapter 70), and as far as I can tell, Reddit will only serve video formats. Currently that just shows up as broken. Imgur does this too with its "gifv" (which is just mp4 with a different extension).

Video codecs have been replacing the aging animated GIF format over the past decade because they offer much better compression (higher quality at much lower file sizes). Because we run our scrapers on GitHub Actions, we can't really afford to transcode the files on our end.

I don't think Cubari would need to do much to support these besides detecting the source format and using a <video> element instead of <img>; a rough sketch follows.
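
For illustration only, a minimal sketch of extension-based detection, assuming the extension alone identifies video sources (real sources may need MIME-type checks instead); element_for and VIDEO_EXTENSIONS are hypothetical names:

from os.path import splitext
from urllib.parse import urlparse

# Extensions assumed here to indicate video content; imgur's "gifv" is
# mp4 behind a different extension.
VIDEO_EXTENSIONS = {'.mp4', '.webm', '.gifv'}

def element_for(page_url: str) -> str:
    # Return which HTML element the reader should render for this page.
    extension = splitext(urlparse(page_url).path)[1].lower()
    return 'video' if extension in VIDEO_EXTENSIONS else 'img'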

Cubari: generic JSON proxy

The current gist proxy is limited to GitHub users. It would be desirable to create a generic proxy, much like the FoolSlide one, that accepts arbitrary JSON URLs (which adhere to the schema) in order to proxy any anonymous paste service; a sketch of the fetch step follows. If you're OK with this idea, I can work on a PR.
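
As a sketch of what that fetch step might look like, validating a few fields from the gist template in the OneDrive issue above (title and description are marked required there; chapters is assumed required as well); fetch_series_json is a hypothetical name, not an existing Cubari function:

import requests

# Fields the proxied JSON is assumed to need before it can be rendered.
REQUIRED_KEYS = {'title', 'description', 'chapters'}

def fetch_series_json(json_url: str) -> dict:
    data = requests.get(json_url, timeout=10).json()
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f'JSON is missing required keys: {sorted(missing)}')
    return data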
