cofacts / rumors-api

GraphQL API server for clients like rumors-site and rumors-line-bot

Home Page: https://api.cofacts.tw

License: MIT License

Languages: JavaScript 99.78%, Dockerfile 0.06%, Pug 0.16%
Topics: rumors, elasticsearch, fact-checking, crowdsourcing

rumors-api's Introduction

rumors-api


GraphQL API server for clients like rumors-site and rumors-line-bot

Configuration

For development, copy .env.sample to .env and make the necessary changes.
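On a Unix-like shell this is simply:

$ cp .env.sample .env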

For production via rumors-deploy, do the setup in docker-compose.yml.

Development

Prerequisite

First-time setup

Clone this repository, cd into the project directory, then install the dependencies:

$ git clone --recursive git@github.com:cofacts/rumors-api.git # --recursive for the submodules
$ cd rumors-api

# This ensures the gRPC binary package is installed for the correct platform during development
$ docker-compose run --rm --entrypoint="npm i" api

OAuth2

If you want to test OAuth2 authentication, you will need to fill in the login credentials in .env. Please apply for keys on Facebook, Twitter, and GitHub respectively.

Media

Cofacts API uses Google Cloud Storage to store user-reported media files (image, audio, and video files).

Please populate the following fields in .env if you want to test this.

  • GCS_CREDENTIALS: The service account's JSON key file content.
  • GCS_BUCKET_NAME: The Google Cloud Storage bucket used to store files. It must grant the service account the required permissions.
  • GCS_MEDIA_FOLDER: The prefix for stored files. A trailing / is required if you want all root-level folders to be placed under the specified folder.
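A minimal .env sketch for these fields (the bucket name, folder, and truncated JSON below are illustrative placeholders, not real values):

GCS_CREDENTIALS='{"type": "service_account", "project_id": "...", ...}'  # paste the whole JSON key file content
GCS_BUCKET_NAME=some-cofacts-media-bucket                                # illustrative bucket name
GCS_MEDIA_FOLDER=media/                                                  # trailing "/" keeps everything under media/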

Start development servers

$ mkdir esdata # For elasticsearch DB
$ docker-compose up

This will:

  • Start the rumors-api server on http://localhost:5000. It restarts whenever you update any file.
  • Start rumors-site on http://localhost:3000. You can populate the session cookie by logging in on the site (when credentials are in place in .env). However, server-side rendering will not work properly, because the rumors-site container cannot access localhost URLs.
  • Start Kibana on http://localhost:6222.
  • Start the Elasticsearch DB on http://localhost:62222.
  • Start the URL resolver on http://localhost:4000.

To stop the servers, just press Ctrl-C and all Docker containers will be stopped.

Populate ElasticSearch with data

Ask a team member to send you the nodes directory, then run:

$ docker-compose stop db

to stop the db instance.

Put the nodes directory right inside the esdata directory created in the previous step, then restart the database using:

$ docker-compose start db
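Put together, the whole procedure looks like this (the source path of the nodes directory is wherever you saved the copy you received):

$ docker-compose stop db
$ cp -r /path/to/received/nodes esdata/nodes
$ docker-compose start db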

Detached mode & Logs

If you do not want a console occupied by docker-compose, you may use detached mode:

$ docker-compose up -d

Access the logs using:

$ docker-compose logs api     # `api` can also be `db` or `kibana`
$ docker-compose logs -f api  # Tail mode

About test/rumors-db

This directory is managed as a git submodule. Use the following command to update it:

$ npm run rumors-db:pull

Lint

# Please run lint before sending a pull request
$ npm run lint
# Automatically fix format errors
$ npm run lint:fix

Test

To prepare the test DB, first start an Elasticsearch server on port 62223:

$ docker run -d -p "62223:9200" --name "rumors-test-db" docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.2
# If it says 'The name "rumors-test-db" is already in use',
# Just run:
$ docker start rumors-test-db

Then run this to start testing:

$ npm t

If you get "Elasticsearch ERROR : DELETE http://localhost:62223/replies => socket hang up", please check whether the test database is running. It takes some time for Elasticsearch to boot.
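One quick way to check whether the test database is up is to hit the standard Elasticsearch health endpoint (no project-specific assumptions here):

$ curl http://localhost:62223/_cluster/health?pretty   # status should be "green" or "yellow" once it has booted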

If you want to run tests on a specific file (e.g. src/xxx/__tests__/ooo.js), run:

$ npm t -- src/xxx/__tests__/ooo.js

When you want to update Jest snapshots, run:

$ npm t -- -u

Tests requiring additional env vars

  • media-integration
    • Requires GCS_CREDENTIALS and GCS_BUCKET_NAME to be set.
    • Will write to the specified bucket.
  • fetchStatsFromGA

Deploy

Build the Docker image. The commands below are essentially the same, differing only in the Docker tag.

$ docker build -t cofacts/rumors-api:latest .
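For example, to build the same image with a specific tag instead of latest (the tag name below is purely illustrative):

$ docker build -t cofacts/rumors-api:some-tag .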

Run the Docker image on the local machine, then visit http://localhost:5000. (To test functionality involving the DB, the Elasticsearch DB must be reachable as specified in .env.)

$ docker run --rm -it -p 5000:5000 --env-file .env cofacts/rumors-api

Cronjob / management scripts

Clean up old urls entries that are not referenced by any article or reply

The urls index serves as a cache for the URL scraper and grows as ListArticle is invoked with URLs. The following script cleans up urls entries that no article or reply currently references.

$ docker-compose exec api node_modules/.bin/babel-node src/scripts/cleanupUrls.js

Fetching user activities from BigQuery

  • The user activities of website & chatbot LIFF web views are collected and synced to BigQuery using built-in GA4 BigQuery Links.

    • LIFF and website should be different web streams on GA4.
    • Streams are differentiated using stream_id on BigQuery.
    • In the GA4 BigQuery Link, both "Daily" (resulting in events_YYYYMMDD tables) and "Streaming" (resulting in events_intraday_YYYYMMDD tables) are used.
    • The separation of tables respects the "Reporting time zone" setting on Google Analytics.
  • Make sure the following params are set in .env: LINE_BOT_EVENT_DATASET_ID, GA4_DATASET_ID, GA_WEB_STREAM_ID, GA_LIFF_STREAM_ID, TIMEZONE.

  • The sync script will authenticate to BigQuery using Application Default Credentials.

    • Please create a service account under the project, download its key, and use the GOOGLE_APPLICATION_CREDENTIALS env var to provide the path to your downloaded service account key (see the example after this list). See the documentation for details.
    • Please make sure the service account has read-only access to both LINE_BOT_EVENT_DATASET_ID and GA4_DATASET_ID.
  • Make sure the service account behind the key in the previous step has the following minimum roles:

    • BigQuery Job User on the GCP project
    • BigQuery Data Viewer on the dataset specified by LINE_BOT_EVENT_DATASET_ID, and the dataset specified by GA4_DATASET_ID.
  • To fetch stats for the current date, run:

$ node_modules/.bin/babel-node src/scripts/fetchStatsFromGA.js
  • For more options, run the above script with --help or see the file level comments.
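For example, a typical invocation with Application Default Credentials might look like this (the key path is illustrative):

$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
$ node_modules/.bin/babel-node src/scripts/fetchStatsFromGA.js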

Removing article-reply from database

  • To set an article-reply to the deleted state on production, run:
$ node build/scripts/removeArticleReply.js --userId=<userId> --articleId=<articleId> --replyId=<replyId>
  • For more options, run the above script with --help or see the file level comments.

Block a user

  • Please first publicly announce that the user will be blocked, so that the announcement URL can be provided.
  • To block a user, execute the following:
$ node build/scripts/blockUser.js --userId=<userId> --blockedReason=<Announcement URL>
  • For more options, run the above script with --help or see the file level comments.

Replace the media of an article

  • This command replaces all the variants of a media article's file on GCS with the variants of the new file.
$ node build/scripts/replaceMedia.js --articleId=<articleId> --url=<new-file-url>

Generating a spreadsheet of new article-categories for human review

  • To retrieve a spreadsheet of article categories of interest after a specific timestamp, run:
$ node build/scripts/genCategoryReview.js -f <ISO Timestamp>
  • For more options, run the above script with --help or see the file level comments.

Write article-categories result back to DB and generate ground truth files

First, fill in GOOGLE_SHEETS_API_KEY in .env. The API key can be created from the credentials page of the Google Cloud Platform console. This key is only used to access the Google Sheets API.

Then, run:

$ node -- build/scripts/genBERTInputArticles.js -s <Google spreadsheet ID> -o <Output directory>

The ground truth files in JSON will be written to the output directory.

Generate a new AI reply for the specified article

This command generates a new AI reply even if the article already has one. It is suitable for cases where the existing AI reply is not appropriate.

$ node build/scripts/genAIReply.js -a <articleId> --temperature=1

One-off migration scripts

Fill in urls index and hyperlinks field for all articles & replies

First, make sure .env is configured so that the correct DB is specified. Then, at the project root, run:

$ node_modules/.bin/babel-node src/scripts/migrations/fillAllHyperlinks.js

This script scans all articles & replies to fill in their hyperlinks field and also populates the urls index. The urls index is used as a cache: if a URL already exists in urls, no HTTP request is triggered.

Generate User instances for backend users

First, make sure .env is configured so that the correct DB is specified; you might want to create a snapshot before running the script. Then, at the project root, run:

$ node_modules/.bin/babel-node src/scripts/migrations/createBackendUsers.js

This script scans for all the user references in analytics, articlecategoryfeedbacks, articlereplyfeedbacks, articles, replies, and replyrequests, creates users for those that are not already in the DB, and updates all the docs. See the comments at the top of the script for how users are referenced in each doc.

Troubleshooting

Failed to load gRPC binary on Mac

If the rumors-api server fails to start with the following error:

Cannot find module '/srv/www/node_modules/grpc/src/node/extension_binary/node-v72-linux-x64-glibc/grpc_node.node'

try running:

npm rebuild --target_platform=linux --target_arch=x64 --target_libc=glibc --update-binary

Legal

LICENSE defines the license agreement for the source code in this repository.

LEGAL.md is the user agreement for Cofacts data users who leverage Cofacts data provided by the API or via cofacts/opendata.

rumors-api's People

Contributors

changhc, dannynash, darkbtf, eliot-chang-lb, godgunman, johnson-liang, kytu800, lucienlee, mrorz, neighborhood999, nonumpa, quad, renovate-bot, renovate[bot], sayuan, yanglin5689446, yhsiang, ztsai


rumors-api's Issues

Support new filter / sort options in ListArticle or Search

Allow users to use the following filters:

  • Articles I have marked as "reply later" ( #34 )
  • All articles that nobody has marked as "reply later" ( #34 )
  • Article tags ( #32 )
  • Articles I have replied to
  • Articles I have sent a replyRequest for ("I want to know") ( Related: cofacts/rumors-site#13 )
  • Articles whose existing replies everyone finds unhelpful / sort by unhelpfulness (ascending by "positive" + "negative" feedback)
  • Sort by the time each article was most recently reported
  • Articles whose replies include "contains true information", "contains misinformation", or "not an article"
  • Articles whose replies do not include "contains true information", "contains misinformation", or "not an article"

Ideally the filters above can be combined (all conditions ANDed together).

Need to enlarge the body size limit of koa-bodyparser

View details in Rollbar: https://rollbar.com/mrorz/rumors-api/items/6/


Error: request entity too large
  File "/srv/www/node_modules/raw-body/index.js", line 196, in readStream
        return done(createError(413, 'request entity too large', 'entity.too.large', {
  File "/srv/www/node_modules/raw-body/index.js", line 110, in executor
        readStream(stream, encoding, length, limit, function onRead (err, buf) {
  File "/srv/www/node_modules/raw-body/index.js", line 109, in getRawBody
      return new Promise(function executor (resolve, reject) {
  File "/srv/www/node_modules/co-body/lib/form.js", line 35, in Function.module.exports [as form]
      return raw(inflate(req), opts)
  File "/srv/www/node_modules/koa-bodyparser/index.js", line 89, in parseBody
          return yield parse.form(ctx, formOpts);
  File "native", line unknown, in next
  File "/srv/www/node_modules/co/index.js", line 65, in onFulfilled
            ret = gen.next(res);
  File "/srv/www/node_modules/co/index.js", line 54, in <unknown>
        onFulfilled();
  File "/srv/www/node_modules/co/index.js", line 50, in Object.co
      return new Promise(function(resolve, reject) {
  File "/srv/www/node_modules/co/index.js", line 118, in Object.toPromise
      if (isGeneratorFunction(obj) || isGenerator(obj)) return co.call(this, obj);

Internal server error when logging in in search result

Steps to reproduce:

  1. Go to the article list page. If already logged in, log out first.
  2. In the search box, type Chinese characters and perform a search.
  3. Log in using any method.
  4. See an internal server error.

Root cause: URLs are not encoded, but the redirect location requires them to be.

Automated script for updating elasticsearch from Airtable

Data needs to be periodically updated from Airtable into Elasticsearch.

Besides the cron job script, the important part is automatically detecting similar articles, or conservatively treating rumors that differ noticeably as "different" (but in that case, with the current search scoring mechanism, the best-matching article will not be found, Orz).

Show related paragraphs in search result, instead of the first paragraphs

Scenario

  1. LINE users seem unhappy with the current "similarity" and tend to create new articles all the time. Showing exact sentence matches may help them choose identical articles.

  2. Snippets / highlights can help Editors find interesting articles in the "related replies / articles" section.

Proposed solutions

API server should return matched paragraphs in each search result. LINE bot & website should display the search result in a manner similar to the snippets in Google Search.

This is achievable via Elasticsearch's "highlighting" function.

https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-highlighting.html
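A minimal sketch of such a highlighted query, assuming the legacy elasticsearch JS client and a text field on the articles index (both are assumptions for illustration, not taken from this repo's actual schema):

const elasticsearch = require('elasticsearch');

// Assumption: the dev Elasticsearch from docker-compose, articles stored with a `text` field.
const client = new elasticsearch.Client({ host: 'localhost:62222' });

async function searchWithSnippets(queryText) {
  const result = await client.search({
    index: 'articles',
    body: {
      query: { match: { text: queryText } },
      // Ask Elasticsearch to return the matching fragments for each hit.
      highlight: { fields: { text: {} } },
    },
  });
  // Each hit carries highlight.text: an array of snippets with <em> tags around matches.
  return result.hits.hits.map(hit => ({
    id: hit._id,
    snippets: hit.highlight ? hit.highlight.text : [],
  }));
}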

[line] Purchase a dedicated LINE ID, apply for a GitHub organization and a g0v domain name

We need to discuss with everyone which ID would be good~

Besides the LINE ID, where else will the English name be used:

  1. Domain name
  2. LINE ID
  3. GitHub organization name

How 「真的假的」 differs from other rumor-debunking sites:

  1. We do not generate content; we act as a curator / search / entry point
  2. Crowdsourced collaboration
  3. In the future, besides rumor verification, we might also let netizens report controversial content and attach links to counterarguments. But perhaps we should first focus on making rumor-verification search good and build a reputation (a search engine for rumor rebuttals?) before moving in that direction.

Names we have thought of:

  1. Rumor-themed
    RumorsHasIt
    SendMeRumors
    rumor search / rumearch
    findrumors

  2. 「真的假的」-themed
    Realllly
    for real?
    usodaro
    majide
    honnto


Other sites (content generation, media-type):
http://www.snopes.com/
https://en.wikipedia.org/wiki/TruthOrFiction.com

Index the title and content of URLs in page

Problems we want to solve:

  1. During retrieval, we basically cannot do anything when we hit a link; even if the article behind the link is highly relevant, it cannot be found.
  2. When the LINE bot shows found articles for the user to choose from, it handles links rather poorly.
  3. It is cumbersome for editors to click through, and related-article matching does not work well for links.

If, for each link, we could record its

  1. Title
  2. Canonical URL (after redirect)
  3. Content

and include them in full-text search, things would be much more convenient.

Enhance "NOT ARTICLE" functionality

  • Rename it to "out of verification scope".
  • Add a template "This is a promotional campaign, running until…" that guides editors to fill in the campaign period; editors may also leave it blank.
  • Add a template "The message only contains broken links".
  • Add a "verification scope" link to the editing UI (for editors); when a reply says "out of verification scope", show users a link explaining "what the verification scope is" (for users).

Edit and delete replies

Reply authors can:

  • Edit their own replies (creating a new replyVersion)

Connection authors can:

  • Delete connections they created themselves (via a delete flag)

We originally discussed whether reply authors may delete reply connections created by others,
but since replies are CC0, reply authors should not be able to delete other people's connections.

Crawler framework

Each crawler should include 2 parts:

  1. Scraper: Given a timestamp, store all crawled documents that come after the timestamp into a WARC archive.
  2. Compiler: Given a WARC archive filename and a timestamp, parse the crawled documents into the latest structured formats.

Requires documentation on how to have crawlers up and running.

Why Scraper-Compiler separation?

Because the structured format for rumor-db is subject to change, we may need to re-parse docs into the latest formats. It helps to store the previously crawled pages in WARC archives and have a parser that parses the data into the latest format.

The crawled website is subject to change as well. The WARC format includes the parse date in its header; compilers can use it to handle different versions of the crawled website.

Why WARC?

It is used by the Internet Archive and Common Crawl. Currently there is a great parsing / generation library for Python; however, it has little support in NodeJS :/

http://www.archiveteam.org/index.php?title=Wget_with_WARC_output
https://github.com/internetarchive/warc

How are individual crawlers integrated with the framework?

Each crawler should be a docker image on Docker Hub. The crawler and the framework communicate through a mounted file system. The framework will create a directory, put input.json with input arguments in it, mount it under /data inside the container, and docker run the crawler. The crawler is expected to write output.warc and output.json as the output of the Scraper and the Compiler, respectively.
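A sketch of what one invocation could look like from the framework's side (the image name and the input arguments are hypothetical; only the /data mount and the input.json / output.warc / output.json conventions come from the description above):

$ mkdir crawl-job
$ echo '{"since": "2017-01-01T00:00:00Z"}' > crawl-job/input.json
$ docker run --rm -v "$(pwd)/crawl-job:/data" some-crawler-image
$ ls crawl-job                     # expect input.json, output.warc and output.json afterwards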

Add new article type "Sarcasm"

In the future we plan to let editors, when replying, mark an article as one of three types: "contains misinformation", "contains no misinformation", or "not a forwarded message". When the LINE bot replies, if an article has multiple editor replies, it will show something like "this message has been marked as 'contains misinformation' by 3 people and as 'contains no misinformation' by 1 person".

Since some rumors are actually satire / mockery, or jokes,

they may indeed "contain misinformation", but the intent is not to make people believe them; the falsehood is the punchline. Marking them as "contains misinformation" feels a bit odd (after all, no "debunking" article would humorlessly point it out).

Perhaps we should add a "sarcasm" type to mark this kind of message?

README update about yarn environment

As the meeting notes point out, it is not necessary to assume developers have no Node environment installed.

If developers have a Node environment installed, the current install script can be more straightforward. At the very least, yarn install can be carried out before docker-compose up, providing a smoother first-time development experience.

Enhance duplicate article check

When reviewing #53 , @darkbtf mentioned that we can use the hashed article content as the article id when indexing the article in CreateArticle.

This would greatly simplify the duplication check in #53: we can just go ahead and index, and fetch only when the DB insertion fails. This saves one round trip between the server and the database for normal article insertions.

TODO:

  1. Use article content as _id when indexing article in CreateArticle
  2. Simplify the duplication check in CreateArticle
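A minimal sketch of deriving a deterministic _id from the article content (the hash choice and normalization are assumptions, not something decided in this issue):

const crypto = require('crypto');

// Hypothetical helper: the same text always yields the same ID, so indexing a
// duplicate article fails with a conflict instead of creating a second document.
function articleIdFromText(text) {
  return crypto.createHash('sha256').update(text.trim()).digest('hex');
}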

Tag articles with keyword categories and allow search by label

At the 3/18 meetup, both the wiki community and 阿孝老師 suggested letting people from different knowledge domains divide the work of replying to articles. Implementing user-generated labels seems like a good approach.

RFC: implement labels in rumors-api similar to hackpad / niconico, satisfying:

  1. Users can freely attach labels to articles
  2. When entering a label, existing labels are used for autocomplete
  3. Articles with a specific label can be listed

Since there will surely be few labels at first, I think we can discuss later whether to merge labels once there are too many. The "issues" of 立委投票指南 (the legislator voting guide) appear to be a similar implementation.

Implementation: add a field directly on articles that stores an array of text, without creating a separate index.

How to handle 0-day rumors that have no debunking articles yet

The Mid-Autumn Festival message "the two holes in saury are nematodes", and the recent message "Just got word that Mrs. Trump's car was burned by crowds in front of Trump Tower in New York", both had no debunking articles at the time they were being wildly forwarded.

Although news debunking "the two holes in saury are nematodes" can now be found, groundless rumors like the latter may never be debunked by anyone.

Article "shopping cart": mark "I'll reply later"

The pre-meeting discussion on 3/18 mentioned that there are currently too many unanswered rumors, which demotivates editors.

If editors could mark an article as "I want to reply", and the article list had two more filters, "I want to reply" and "I have replied", then editors could build their own TODO lists, would no longer lose track of rumors they previously wanted to answer, and would be more motivated to reply because the TODO is clearer.

Also, at occasions like editor meetups, an "I want to reply" count can help editors avoid duplicated work.

RFC:

The implementation would store this directly in the articles index.
Add a pendingRepliers[] field recording {userId, createdAt}.
In the article list, small text such as "someone said they want to reply 10 minutes ago" can tell other editors how long ago someone said they would write a reply.

To yes, or not to yes

(Haven't used this in a long time, used it wrong, and am trying to figure out how to delete it.)

English name: To yes, or not to yes

Chinese name (optional): 「真的假的」

Explanation:
  My English is not great; it's just that the project owner asked for "a concise, catchy, memorable name", and my intuition surfaced the phrase "To be, or not to be".

  I thought of two words to substitute: one is "truth" (which doesn't seem to rhyme) and the other is "yes".

  Those with better English, please carry on; this is as far as my ability goes.


Allow article lists to filter by reply request counts

As discussed on 1122 and 1025, only articles that have 2 or more reply requests are worth replying to.

Since the first user who sends in the message cannot get the response, if an article has had only one reply request for a long time, the reply will never be used by the LINE bot in the future.

Editors should have the option to list only the articles that have more reply requests.

[Gamification] Add "level" field for users

The level is determined by the normal article reply count.
The number of article replies required to reach level n is the nth number in the Fibonacci sequence.

  • 0 article replies -> lv 0
  • 1 article reply -> lv 1
  • 2 article replies -> lv 2
  • 3 article replies -> lv 3
  • 5 article replies -> lv 4
  • 8 article replies -> lv 5
  • 13 article replies -> lv 6
    ... etc

The fields include:

  • level: 0~n, current level
  • levelProgress.total: The number of additional normal article replies required to reach the next level. For levels 0, 1, and 2, it's 1. For level 3, it's 2. For level 4, it's 3, and so on.
  • levelProgress.current: The number of normal article replies collected so far within this level. Ranges from 0 to levelProgress.total.
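A minimal sketch of deriving both fields from a user's normal article reply count, following the Fibonacci thresholds above (the function name is hypothetical):

// Cumulative thresholds to reach level 1, 2, 3, ...: 1, 2, 3, 5, 8, 13, ...
function levelAndProgress(replyCount) {
  let level = 0;
  let reached = 0;   // replies needed to reach the current level
  let next = 1;      // replies needed to reach the next level
  let afterNext = 2; // replies needed to reach the level after that
  while (replyCount >= next) {
    level += 1;
    [reached, next, afterNext] = [next, afterNext, next + afterNext];
  }
  return {
    level,
    levelProgress: {
      total: next - reached,         // additional replies needed within this level
      current: replyCount - reached, // replies already collected within this level
    },
  };
}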

Reference: https://hackmd.io/s/B1bb-hXhz#%E7%B7%A8%E8%BC%AF%E7%A4%BE%E7%BE%A4%E7%9A%84%E7%87%9F%E9%81%8B%E6%96%B9%E5%BC%8F

Validating the search system with the existing database

For searching a crowdsourced database, a good search system should have the following properties:

  1. When searching for a rumor that already has an answer, that answer should be found.
  2. When searching for a rumor that has no answer yet but whose full text exists in the database, that rumor should be returned, and the user should be told that others have reported it but nobody has answered yet.
  3. When searching for a rumor whose full text is not in the system, the system should not force out a result if it is not confident.

Validation can work as follows:

  • Properties 1 and 2 can be verified by writing a program that searches the database with each stored rumor, one by one, and checks whether the top result is that very rumor (a sketch follows below).
  • Property 3 can be verified by removing an article from the database and then searching for it; finding nothing is the expected behavior, while finding something requires a human to judge whether it is actually related.

When tuning the recommendation formula, automating the validation above would let us run it after every change and be more confident about the formula's effect.
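A rough sketch of the self-retrieval check for properties 1 and 2 (the client, index name, and text field are assumptions for illustration):

const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:62222' });

// For every stored article, search with its own text and check it comes back as the top hit.
async function validateSelfRetrieval() {
  const all = await client.search({
    index: 'articles',
    size: 1000,
    body: { query: { match_all: {} } },
  });
  let misses = 0;
  for (const doc of all.hits.hits) {
    const result = await client.search({
      index: 'articles',
      size: 1,
      body: { query: { match: { text: doc._source.text } } },
    });
    const top = result.hits.hits[0];
    if (!top || top._id !== doc._id) misses += 1;
  }
  console.log(`${misses} of ${all.hits.hits.length} articles failed to retrieve themselves`);
}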

Adding rumors via the API

The current mechanism for adding rumors is:

  1. Chat with the chatbot on LINE, and the message goes into Airtable
  2. Use a script to write the data in Airtable into the database

If there were an API that could add data to the database directly, the Airtable step could be skipped.
On the other hand, the rumor-editing website would then need complete functionality (listing rumors, viewing rumors and answers, editing and submitting, etc.) in order to collect data properly.

Call for an English name for 「真的假的」

「真的假的」 is a chatbot system for quickly verifying rumors: through crowdsourced collaboration, it fact-checks messages shared on social networks whose veracity ("real or fake?") is unknown. 「真的假的」 now needs an English name for registering the GitHub organization, the LINE chatbot account ID, and the domain name.

Confucius said: "If names are not correct, speech will not flow smoothly; if speech does not flow smoothly, affairs cannot be accomplished."

「真的假的」 needs a concise, catchy, memorable name so that the public can find us!
We hereby collect English name suggestions for 「真的假的」.

If you see a name you like among the names below, click the reaction button at the top right and choose the 👍 emoji to vote for it;

if inspiration strikes, please leave the names you think of in this issue using the format below.

Each comment may contain only one name; please record each name idea separately!


Format

English name: a powerful, resounding name
Chinese name (optional): if you feel 「真的假的」 does not match your timeless English name, leave the name you feel is a perfect match
Explanation: explain the meaning behind your chosen name


Example

English: cofacts
Explanation: cofacts = collaborative + facts, facts produced through crowdsourced collaboration

「真的假的」 thanks you for your contribution!

List submitted articles of a LINE user

As discussed in 20180207

It can be used in:

  • When a user is banned, other users can see what kinds of articles get one banned
  • There can be a list of articles called "submitted by me before"

API key & CORS domains

In order to prevent DDoS attacks from browsers, the API server should only be open to clients that have API keys.

We need a mechanism that allows developers to apply for a key. If the developer's client is web-based, they should specify its domains so that we open up CORS according to the API key.

For clients that send requests through backend HTTP libraries, API keys are still required, but there is no need to specify domains.
