opentermsarchive / engine

Tracks contractual documents and exposes changes to the terms of online services.

Home Page: https://opentermsarchive.org

License: European Union Public License 1.2

Language: JavaScript 100.00%

Topics: history, tos, terms, terms-of-service, terms-and-conditions, database, online-services, sdg-16, sdg-9, sdg-17

engine's People

Contributors: aaronjsugarman, adrienfines, allcontributors[bot], ambnum-bot, amustache, clementbiron, cquest, dependabot[bot], gatienh, guillett, jetlime, karnauskas, kissaki, lverneyperen, martinratinaud, mattisg, michielbdejong, ndpnt, ota-release-bot, pdehaye, siegridhenry, thouriezperen, tosbackcgusbridge-bot, vviers


engine's Issues

Get the ToS of several providers

  • Facebook
  • Twitter
  • Snapchat

One JSON description file per service provider, in a providers folder at the root.

{
	"serviceProviderName": "Facebook",
	"langs": [ "en", "fr" ],
	"langStrategy": "accept-header",
	"documents": {
		"tos": {
			"url": "https://…",
			"contentSelector": ".UI_selector",
			"noiseSelectors": [
				"script",
				"style",
				".csrf_token",
				"#ad_unique_id"
			],
			"sanitizationPipeline": [
				"convert-emojis",
				"markdown"
			]
		}
	}
}

Uncertainties

  1. Does langs belong at the root, or in each document entry?
  2. Are noiseSelectors and sanitizationPipeline extensions of a common base, or redefined in full?

Decisions at this stage

  1. langs stays at the root; we'll see whether some providers vary languages per document.
  2. Both lists are redefined in full; we'll see whether all providers end up looking alike.

Keys to implement for this story

  • serviceProviderName
  • documents
  • url
  • contentSelector
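As an illustration, the keys scoped for this story could be checked by a small validator. This is a sketch only, assuming the draft format above; the key names are the proposed ones, not a final schema.

```javascript
// Sketch of a validator for the draft declaration format above. Only the
// keys scoped for this story are checked: serviceProviderName, documents,
// and the per-document url and contentSelector.
function validateDeclaration(declaration) {
  const errors = [];
  if (typeof declaration.serviceProviderName !== 'string') {
    errors.push('serviceProviderName must be a string');
  }
  if (!declaration.documents || typeof declaration.documents !== 'object') {
    errors.push('documents must be an object');
  } else {
    for (const [type, doc] of Object.entries(declaration.documents)) {
      if (typeof doc.url !== 'string') {
        errors.push(`${type}: url must be a string`);
      }
      if (typeof doc.contentSelector !== 'string') {
        errors.push(`${type}: contentSelector must be a string`);
      }
    }
  }
  return errors;
}
```

Returning a list of errors (rather than throwing on the first one) lets a contributor fix a whole declaration file in one pass.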

Implement robots exclusion protocol (`robots.txt`)

Hi,

A question has just surfaced about robots.txt files: should the CGUs project honor them and keep track of their status?

So far, the following script tests whether the URLs can be crawled by a robot (called "CGUs Bot"):

import json
import os
import requests
import urllib.robotparser

from urllib.parse import urljoin

count = 0
for file in sorted(os.listdir('services')):
    if not file.endswith('.json'):
        continue

    with open(os.path.join('services', file), 'r') as fh:
        service = json.load(fh)
        for document in service['documents']:
            count += 1

            document_url = service['documents'][document]['fetch']
            req = requests.head(document_url)
            if req.status_code == 301:
                print('WARNING: Terms URL is a redirection: %s to %s.' % (document_url, req.headers['Location']))
                robots_url = urljoin(req.headers['Location'], '/robots.txt')
            else:
                robots_url = urljoin(document_url, '/robots.txt')

            req = requests.get(robots_url, headers={'User-Agent': 'CGUs Bot'})
            if req.status_code != 200:
                print('WARNING: Status Code %s for %s.' % (req.status_code, robots_url))
                continue
            rp = urllib.robotparser.RobotFileParser()
            # parse() expects an iterable of lines; split() would break on
            # every whitespace character and corrupt the rules.
            rp.parse(req.text.splitlines())
            if not rp.entries:
                print('FAILED TO PARSE robots.txt for %s.' % robots_url)
            if not rp.can_fetch('CGUs Bot', document_url):
                print(document_url)
print('Scanned a total of %d terms URLs.' % count)

No issues detected so far with the existing service files.
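On the engine side (JavaScript), the allow/deny decision could be sketched as below. This is a minimal illustration, not a full robots exclusion protocol implementation: it handles User-agent groups with Allow/Disallow prefix rules only (no wildcards, no crawl-delay).

```javascript
// Minimal robots.txt allow check, a sketch only: User-agent groups and
// Allow/Disallow prefix rules, with the longest matching prefix winning.
function isAllowed(robotsTxt, userAgent, path) {
  const groups = [];
  let current = null;
  for (let line of robotsTxt.split(/\r?\n/)) {
    line = line.replace(/#.*$/, '').trim();
    const match = line.match(/^([A-Za-z-]+)\s*:\s*(.*)$/);
    if (!match) continue;
    const field = match[1].toLowerCase();
    const value = match[2].trim();
    if (field === 'user-agent') {
      // Consecutive User-agent lines share one group of rules.
      if (!current || current.rules.length > 0) {
        current = { agents: [], rules: [] };
        groups.push(current);
      }
      current.agents.push(value.toLowerCase());
    } else if ((field === 'disallow' || field === 'allow') && current) {
      current.rules.push({ allow: field === 'allow', prefix: value });
    }
  }
  const ua = userAgent.toLowerCase();
  const group = groups.find((g) => g.agents.some((a) => a !== '*' && ua.includes(a)))
    || groups.find((g) => g.agents.includes('*'));
  if (!group) return true; // no applicable group means allowed
  let best = null;
  for (const rule of group.rules) {
    if (rule.prefix && path.startsWith(rule.prefix)
        && (!best || rule.prefix.length > best.prefix.length)) {
      best = rule;
    }
  }
  return best ? best.allow : true;
}
```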

Best,

Document declarations and filters versioning

We need to be able to regenerate versions from snapshots. As document declarations are expected to change over time (location or filters), we can't rely on the latest declaration to regenerate a version from an old snapshot. So we need a system to keep track of declaration changes; that's what we call declarations and filters versioning.

At this time, we see three solutions which have in common the following rules:

  • history is optional
  • the current valid declaration has no date and should be clearly identifiable
  • the valid_until date is an inclusive expiration date. It should be the exact authored date of the last snapshot commit for which the declaration is still valid.
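Whichever option is chosen, regenerating a version from an old snapshot means selecting the declaration whose valid_until covers the snapshot date. A sketch of that selection, assuming a history array shaped like the examples in the options (the field names are the proposed ones, not final):

```javascript
// Sketch: given the current declaration, its optional history array, and a
// snapshot date, return the declaration that was valid at that date.
// valid_until is treated as inclusive, per the rules above.
function declarationAt(current, history, snapshotDate) {
  const candidates = (history || [])
    .filter((entry) => new Date(entry.valid_until) >= snapshotDate)
    .sort((a, b) => new Date(a.valid_until) - new Date(b.valid_until));
  // The earliest entry still covering the snapshot is the one that applied;
  // if no entry covers it, the current (undated) declaration applies.
  return candidates[0] || current;
}
```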

Option 1: Add a history field in the service declaration

In services/ASKfm.json:

{
  "name": "ASKfm",
  "documents": {
    "Terms of Service": {
      "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
      "select": ".selection",
      "filter": [ "add" ]
      "history": [
        {
          "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
          "select": "body",
          "filter": [ "add" ]
          "valid_until": "2020-08-24T14:02:39Z"
        },
        {
          "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
          "select": "body",
          "valid_until": "2020-08-23T14:02:39Z"
        }
      ]
    }
  }
}

Note: when no history tracking is needed, the file may simply omit the history field.

Pros:

  • Everything is in the same file:
    • reduces the risk of forgetting to update existing history
    • helps users discover that history is a thing and encourages them to learn about it if they feel the need
    • no (pseudo-)hidden knowledge about history

Cons:

  • The apparent complexity can discourage new contributors
  • Over time, the file can become huge

Option 2: Add a serviceId.history.json file

In services/ASKfm.json:

{
  "name": "ASKfm",
  "documents": {
    "Terms of Service": {
      "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
      "select": ".selection",
      "filter": [ "add" ]
    }
  }
}

In services/ASKfm.history.json:

{
  "name": "ASKfm",
  "documents": {
    "Terms of Service": [
      {
        "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
        "select": "body",
        "filter": [ "add" ]
        "valid_until": "2020-08-24T14:02:39Z"
      },
      {
        "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
        "select": "body",
        "valid_until": "2020-08-23T14:02:39Z"
      }
    ]
  }
}

Pros:

  • The service declaration stays small and simple
  • The history file is kept close to the service declaration, so users are likely to notice it

Cons:

  • Makes the history capability harder to discover
  • Increases the probability of forgetting to update the history file when changing the service declaration

Option 3: Add a history service declaration file in a services/history folder

In services/ASKfm.json:

{
  "name": "ASKfm",
  "documents": {
    "Terms of Service": {
      "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
      "select": ".selection",
      "filter": [ "add" ]
    }
  }
}

In services/history/ASKfm.json:

{
  "name": "ASKfm",
  "documents": {
    "Terms of Service": [
      {
        "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
        "select": "body",
        "filter": [ "add" ]
        "valid_until": "2020-08-24T14:02:39Z"
      },
      {
        "fetch": "https://ask.fm/docs/terms_of_use/?lang=en",
        "select": "body",
        "valid_until": "2020-08-23T14:02:39Z"
      }
    ]
  }
}

Pros:

  • The service declaration stays small and simple
  • All history updates are reserved to users with the required knowledge, who might act as gatekeepers

Cons:

  • All history updates are reserved to users with the required knowledge, who might act as gatekeepers :)
  • Need to rely on knowledgeable people to keep the history up to date

Some thoughts

Community

The choice may have implications for the community that will grow around the project.

Option 1 shows everything to everyone. It might frighten some contributors with its apparent complexity (once there is history in the declaration file), but it might also encourage them to learn about it if they want or need to. All contributors will share the same view and knowledge of the system, which might encourage them to collaborate, learn and improve together.

Options 2 and 3 hide the complexity of history management in separate files, and only the most adventurous contributors will find them by themselves. Contributions to those files will probably be made by specific contributors taught to manage them, creating two different kinds of contributors: those who stay with the basic service declaration, not knowing that more complex options exist, and those who hold the knowledge of history management, whose work might stay in the shadows or who might act as gatekeepers.

[TikTok] [ToS/PP] Split document by jurisdiction

TikTok has a ToS and Privacy Policy per jurisdiction, consolidated in the same document and targetable with an ID selector (#terms-us, #terms-eea…). It would be good to split them by jurisdiction.

Bypass bot detectors

Hi,

Rakuten and Leboncoin have very strong bot detectors, preventing us from automatically fetching their CGUs (at least from a regular OVH machine). See https://fr.shopping.rakuten.com/newhelp/conditions-generales/ or https://www.leboncoin.fr/dc/cgu. It is possible that #138 and having JS enabled will help here, but I think this won't be enough.

Best,

EDIT: Same for RueDuCommerce (see https://www.rueducommerce.fr/info/mentions-legales/cgv) or FNAC (https://www.fnac.com/Help/cgv-fnac#bl=footer), they all use the same system, powered by Datadome.

Import history from tosback.org

As discussed this morning with @michielbdejong and @Ndpnt: import the history of snapshots from https://github.com/tosdr/tosback2/tree/master/crawl.

For each document of each service declaration present after #58:

  • Get the oldest version in history from https://github.com/tosdr/tosback2/tree/master/crawl.
  • Add a wrapper such as <html><body> to transform the HTML fragment into an HTML document parseable by JSdom.
  • Store the text as a snapshot.
  • Trigger filtering.
  • Create a new version if appropriate.
  • Ensure the date of the commit is the one of the original commit in TOSback, not the current date.
  • Store reference to original commit in both snapshot and version commit messages, with a last line presenting a message such as This snapshot was imported from https://github.com/tosdr/tosback2/blob/5acac7abb5e967cfafd124a5e275f98f6ecd423e/crawl/4shared.com.
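The wrapping step above can be sketched as follows; a minimal illustration, assuming fragments are either bare HTML snippets or already full documents.

```javascript
// Sketch of the wrapping step: TOSback stores HTML fragments, so we wrap
// them into a full document before handing them to the DOM parser.
function wrapFragment(fragment) {
  const trimmed = fragment.trim();
  // Already a full document: leave it alone.
  if (/^<!doctype html|^<html/i.test(trimmed)) return fragment;
  return `<!DOCTYPE html><html><head></head><body>${fragment}</body></html>`;
}
```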

Open questions

  • Should the date of the commit that reflects the TOSback one be the commit date or the author date? I'd prefer to use the author date as I believe it would keep the graph ordered by date, but @Ndpnt argues this might break GitHub's view. A check should be done on whether GitHub presents author date or commit date in history view.

Timeout of `npm start`?

Hi!

We're now running this code (without scheduler) on https://tosback.org (went live today!) and one thing we had to change was to add a timeout, so the process exits after one hour.

Did you have a similar experience or does npm start neatly exit after it reaches the last service in the alphabet?

Validate minimum size after filtering

As discussed this morning with @michielbdejong and @Ndpnt:

Add a validation step in scripts/validation/validate.js to check that the resulting string after filtering a document is at least n characters long, where n is either arbitrary or computed once as a fraction of the minimum size of all currently tracked documents.
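A sketch of that validation step, assuming the threshold is computed as a fraction of the smallest currently tracked document with an arbitrary floor; both numbers below are illustrative, not values the project has settled on.

```javascript
// Sketch of the proposed minimum-size check. The threshold is either a
// fixed floor or a fraction of the smallest currently tracked document;
// the fraction (0.5) and floor (100) are illustrative defaults.
function minimumLength(trackedLengths, fraction = 0.5, floor = 100) {
  if (!trackedLengths.length) return floor;
  return Math.max(floor, Math.round(Math.min(...trackedLengths) * fraction));
}

function validateFilteredContent(content, threshold) {
  if (content.length < threshold) {
    throw new Error(
      `Filtered content is ${content.length} characters long, expected at least ${threshold}`
    );
  }
}
```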

Lint code

Add ESLint to ease collaboration, importing code style preferences from existing @ambanum repositories.

Handling terms in PDF

Hi,

While looking at app stores terms and developer policies, I came across the ones from Apple App Store, which are only available as PDF. See https://developer.apple.com/terms/apple-developer-agreement/Apple-Developer-Agreement-French.pdf.

Not sure how common this is, and whether it might be worth developing ad-hoc code to handle PDF files?

I'm thinking of basically running pdftotext / pdftohtml from poppler-utils, or something more elaborate such as https://www.npmjs.com/package/pdf2html. This should be designed a bit carefully to integrate nicely with the current CD setup.

Best,

Document common practice for choosing a service name

In order to go from 50 to 5,000 services, we need to have and understand our system for picking a canonical service name.

If you look at the filenames in https://github.com/ambanum/CGUs/tree/master/services, most of the names are spelled and capitalized:

  1. the same way as on Wikipedia
  2. in Pascal case, but with abbreviations in all caps
  3. preceded by the parent service if applicable
  4. as just the commonly known name, with no suffixes like ".com" or "Inc."
  5. with spaces and dots removed

Examples:

  • ask.fm → (1) Ask.fm → (2) Ask.FM → (5) AskFM
  • https://www.apple.com/ios/app-store/ → (1) App Store → (3) Apple App Store → (5) AppleAppStore
  • Facebook Payments Inc. → (4) Facebook Payments → (5) FacebookPayments
  • Foursquare City Guide → (4) Foursquare
  • reddit.com → (1) reddit → (2) Reddit

Exceptions (fix these?):

  • GooglePlayStore instead of GooglePlay
  • LastFm instead of LastFM (inconsistent because we also have AskFM)
  • deviantART instead of DeviantArt
  • hi5 instead of Hi5
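The mechanical parts of the convention (rules 2 and 5) can be sketched in code. This is only an illustration, not a tool the project has; rules 1, 3 and 4 require human judgment.

```javascript
// Sketch applying the mechanical parts of the naming convention: split on
// spaces and dots, capitalize each part, keep existing all-caps
// abbreviations (like "FM") untouched.
function normalizeServiceName(commonName) {
  return commonName
    .split(/[\s.]+/)
    .filter(Boolean) // drop empty parts from leading/trailing separators
    .map((part) => (part === part.toUpperCase()
      ? part // keep abbreviations as-is
      : part.charAt(0).toUpperCase() + part.slice(1)))
    .join('');
}
```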

[StackOverflow] [PP] Track additional privacy policies

This statement should be read together with our related Privacy Notices, in particular our Privacy Notice for the Public Network, Privacy Notice for Stack Overflow Teams Basic, Privacy Notice for Stack Overflow Teams Business, Privacy Notice for Stack Overflow Teams Enterprise, Privacy Notice for Stack Overflow Talent and Jobs and our Employee Privacy Notice.

If you interact with us through the Public Network In addition, we will collect and process your personal information in accordance with the Stack Overflow Privacy Notice for the Public Network.
If you are a Stack Overflow for Teams, Basic customer In addition, we will collect and process your personal information in accordance with the Stack Overflow for Teams Basic Privacy Notice.
If you are a Stack Overflow for Teams, Business customer In addition, we will collect and process your personal information in accordance with the Stack Overflow for Teams, Business Privacy Notice and any other agreement that we may have with you.
If you are a Stack Overflow Talent customer This statement should be read together with our related Privacy Notices, in particular our Stack Overflow Talent Privacy Notice and any other agreement that we may have with you.
If you are a Stack Overflow for Teams, Enterprise customer In addition, we will collect and process your personal information in accordance with the Stack Overflow for Teams, Enterprise Privacy Notice and any other agreement that we may have with you.
If you visit or register with any of our websites We will collect and use your personal information in accordance with this Privacy Policy and the Privacy Notice for the Public website if applicable. In particular, we may see how you use our websites and what content you interact with and for how long. This may involve the use of cookies which is explained in our Cookie Policy.
If you are a supplier and you or your company provides us with goods or services We may collect your individual contact information in order to communicate with you and may use other information that we need in order to manage our account with your business. We will process such data in accordance with this Privacy Policy and any other agreement that we may have with you.
If you are employed by us We will process your personal information in accordance with our Employee Privacy Notice. This will be made available to you internally.

Tasks stay pending in tosback-import

Follow-up from #71. I think some tasks hit their 10-second timeout but then keep running in the background. The script run ends with the following and then hangs for a long time:

Next task (1 tasks left, running 5 in parallel)
Sologig Privacy Policy start
Sologig Privacy Policy skip
Saving services/Washingtonpost.json
Washingtonpost Privacy Policy done
Pending: [
  'Zimbio - Privacy Policy - http://www.livingly.com/privacy-policy/',
  'Urbanoutfitters - Privacy Policy - http://www.urbanoutfitters.com/urban/help/privacy_security.jsp',
  'Zynga - Privacy Policy - http://company.zynga.com/about/privacy-center/privacy-policy#sharing-information'
]
Could not fetch http://www.urbanoutfitters.com/urban/help/privacy_security.jsp
Urbanoutfitters Privacy Policy fail
Pending: [
  'Zimbio - Privacy Policy - http://www.livingly.com/privacy-policy/',
  'Zynga - Privacy Policy - http://company.zynga.com/about/privacy-center/privacy-policy#sharing-information'
]
Could not filter http://www.livingly.com/privacy-policy/
Zimbio Privacy Policy fail
Pending: [
  'Zynga - Privacy Policy - http://company.zynga.com/about/privacy-center/privacy-policy#sharing-information'
]
Could not fetch http://company.zynga.com/about/privacy-center/privacy-policy#sharing-information
Zynga Privacy Policy fail
Pending: []
Could not fetch http://tracking.quisma.com/policy.htm
Could not fetch http://www.providecommerce.com/privacy.aspx
Could not fetch http://www.seattletimescompany.com/notices/notice2.html
Could not filter http://www.washingtonpost.com/privacy-policy/2011/11/18/gIQASIiaiN_print.html

Move the history to an independent repository

  • Transfer the files to another repository
  • Add the data repository as a submodule of the code repository

Should we also separate the raw data repository from the cleaned data repository?

Only notify for a subset of services

Hi,

I would be interested in getting notifications for only a subset of services (especially with the upcoming inclusion of the tosback declarations).

I could handle it on my own by running a separate copy of CGUs, but would this feature be interesting more broadly? It seems the notification code is in https://github.com/ambanum/CGUs/blob/d979a3ac789e071933b8c2940a24b2a4e1bca8de/src/notifier/index.js, but I am not very familiar with your SendInBlue setup for notifications.

Depending on your feedback, I can help write the patch / code for this :)

Best,

Wrong link for WhatsApp CGUs

This link https://www.whatsapp.com/legal/terms-of-service-eea was last updated in 2018.

The one we should use seems to be: https://www.whatsapp.com/legal/updates/terms-of-service-eea

Git file lock exists

Hi,

When starting with an empty cgus-data git repository and running yarn start, I get:

[Fri Jul 03 2020 12:05:40] [ERROR]  [Discord-Terms of service] Error: Error: Could not commit undefined for undefined (raw version) due to error: Error: fatal: Unable to create '/home/lverney/cgus/cgus-data/.git/index.lock': File exists.

Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.

    at file:///home/lverney/cgus/src/history/persistor.js:9:10
    at /home/lverney/cgus/node_modules/async/dist/async.js:1249:46
    at Array.forEach (<anonymous>)
    at trigger (/home/lverney/cgus/node_modules/async/dist/async.js:1249:27)
    at /home/lverney/cgus/node_modules/async/dist/async.js:1314:25
    at /home/lverney/cgus/node_modules/async/dist/async.js:321:20
    at invokeCallback (/home/lverney/cgus/node_modules/async/dist/async.js:179:13)
    at /home/lverney/cgus/node_modules/async/dist/async.js:173:13
    at processTicksAndRejections (internal/process/task_queues.js:97:5)

I have not yet dug much into this issue; it looks like concurrent access to the git repository from async promises.
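One way to avoid the race, sketched below, is to serialize all git operations through a single promise chain acting as a mutex; this is an illustration of the pattern, not the engine's actual fix.

```javascript
// Sketch: a single promise chain acts as a mutex, so a git operation only
// starts once the previous one has settled, and two async tasks can never
// race on .git/index.lock.
let gitQueue = Promise.resolve();

function withGitLock(operation) {
  // Run the operation whether the previous one resolved or rejected.
  const run = gitQueue.then(operation, operation);
  // Keep the chain alive even when an operation rejects.
  gitQueue = run.catch(() => {});
  return run;
}
```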

Best,

Add translations to the "types" json file

In order to ease the process of adding a new service, I suggest we add a new translations attribute to types.json:

  {
  "Terms of Service": {
    "commitment": {
      "writer": "service provider",
      "audience": "end user",
      "object": "end user’s service usage"
    },
    "translations": {
      "fr": [
        "Conditions de service",
        "Conditions générales d'utilisation"
      ]
    }
  },
  "Privacy Policy": {
    "commitment": {
      "writer": "service provider",
      "audience": "end user",
      "object": "end user’s personal data"
    },
    "translations": {
      "fr": [
        "Politique de confidentialité"
      ]
    }
  }
}

This way, contributors will easily find the right document name.

What do you think?
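A reverse lookup from a translated name to the canonical type could then be sketched like this, assuming the translations attribute proposed above (the function name is illustrative):

```javascript
// Sketch: reverse-lookup from a localized document name to the canonical
// document type, assuming the proposed `translations` attribute.
function findDocumentType(types, localizedName) {
  for (const [type, definition] of Object.entries(types)) {
    if (type === localizedName) return type;
    const translations = definition.translations || {};
    for (const names of Object.values(translations)) {
      if (names.includes(localizedName)) return type;
    }
  }
  return null;
}
```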

Track en-IE versions of documents

Hi,

So far, we have been tracking the en-GB versions of the documents. However, Brexit might affect these versions, which might no longer be aligned with European regulation.

See for instance this move by Facebook: https://www.ft.com/content/bde9c983-bfbd-4bb8-a524-2e93facfe36b. A few days ago, a change in the Microsoft policies (OpenTermsArchive/contrib-versions@151d6e2) with no other changes than switching from British English to American English also suggests that UK terms might soon be aligned with US terms.

I understand we want to track English versions of documents as the main elements, to ease comparison with foreign services and prevent translation artifacts. I suggest we move from en-GB to en-IE (Ireland), as that locale is guaranteed to be in English and under European law.

Best,

fetch error bubbles up to the top

I tried npm start, but after a couple of hours of good work it suddenly exited:

2020-11-19 09:24:24 error Debka — Privacy Policy                                  FetchError: Invalid response body while trying to fetch https://www.debka.com/tac/: incorrect header check
    at Gunzip.<anonymous> (/home/tosback3/tosback-crawler/node_modules/node-fetch/lib/index.js:399:12)
    at Gunzip.emit (events.js:326:22)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21)

/home/tosback3/tosback-crawler/node_modules/async/dist/async.js:181
            setImmediate$1(e => { throw e }, err);
                                  ^
FetchError: Invalid response body while trying to fetch https://www.debka.com/tac/: incorrect header check
    at Gunzip.<anonymous> (/home/tosback3/tosback-crawler/node_modules/node-fetch/lib/index.js:399:12)
    at Gunzip.emit (events.js:326:22)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  type: 'system',
  errno: 'Z_DATA_ERROR',
  code: 'Z_DATA_ERROR'
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `node src/index.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/tosback3/.npm/_logs/2020-11-19T14_24_24_862Z-debug.log

That should never happen, right? It should just catch that error and continue with its next task?
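One way to get that behavior, sketched below: wrap each document task in its own try/catch so a single fetch failure is logged and counted instead of crashing the run. The fetchDocument function here is a hypothetical stand-in for the engine's actual fetch step.

```javascript
// Sketch: run every task, log and collect failures, and keep going.
// `fetchDocument` is a stand-in for whatever retrieves a document.
async function trackAll(tasks, fetchDocument) {
  const failures = [];
  for (const task of tasks) {
    try {
      await fetchDocument(task);
    } catch (error) {
      failures.push({ task, error });
      console.error(`error ${task}: ${error.message}`);
    }
  }
  return failures; // the run completes even if some tasks failed
}
```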

Follow a multi-page document

It would be really interesting to be able to follow documents split across several pages.
For example, look at the community guidelines of Facebook.

So the fetch attribute could take several values:

"Terms of Service": {
  "fetch": {
  	"https://www...",
  	"https://www...",
  	"https://www...",
  	"https://www...",
  }
}

but we might need to define different document properties for each page:

"Terms of Service": {
  "fetch": {
  	{
	  	"url:" "https://www...",
	  	"select": ".target",
	  	"remove": ".child",
	  	"executeClientScripts": "true"
  	},
  	{
	  	"url:" "https://www...",
	  	"select": "#container",
	  	"filter": "removeIdFromParent"
  	},
  }
}
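The merging step could then be sketched as below; the fetchAndExtract callback is a hypothetical stand-in for the engine's fetch-and-select step, injected here so the merging logic stays testable.

```javascript
// Sketch: fetch every page of a multi-page document and concatenate the
// extracted parts in declaration order.
async function fetchMultiPage(pages, fetchAndExtract) {
  const parts = await Promise.all(pages.map((page) => fetchAndExtract(page)));
  return parts.join('\n\n');
}
```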

Silent errors when request fails

Hi,

When a request fails for some reason (in my particular case, a filtering proxy returning an unauthorized error), the error is silenced in the logs. See:

lverney@peren-app-jupyter:~/cgus$ npm start "Airbnb"

> [email protected] start /home/lverney/cgus
> node src/index.js "Airbnb"

(node:2946) ExperimentalWarning: The ESM module loader is experimental.
2020-10-09 14:57:36 info                                                          Refiltering 3 documents from 1 services…
2020-10-09 14:57:36 info                                                          Refiltered 3 documents from 1 services.

2020-10-09 14:57:36 info                                                          Start tracking changes of 3 documents from 1 services…
2020-10-09 14:57:36 info                                                          Tracked changes of 3 documents from 1 services.
lverney@peren-app-jupyter:~/cgus$

(the "hint" to notice it is the absence of explicit "Recorded version" log, but it would probably be better to explicitly have a warning log on request returning anything else than a 200 HTTP code)

Google l10n and i18n

Hi,

Apparently Google has two query parameters for l10n and i18n, namely hl (for locale) and gl (for country).

There were issues recently with rapid and unexpected switching between two geographical regions in Google terms (possibly due to IP geolocation or something like that). This seems solved at the moment, so I am just noting it here in case the issue comes back.

It might be solved by passing something such as gl=ie in the URL.

As an example, https://play.google.com/store/apps/details?id=com.hulu.plus&hl=fr&gl=us and https://play.google.com/store/apps/details?id=com.hulu.plus&hl=fr&gl=fr make the distinction between these parameters clear in their footers.

Best,

Import rules from tosback.org

As discussed this morning with @michielbdejong and @Ndpnt:

  • Import rules from https://github.com/tosdr/tosback2 by converting them into service declarations.
  • Validate each imported service declaration with scripts/validation/validate.js.
    • Drop (or fix) service declarations that fail validation.
  • Convert XPath selectors into CSS selectors.
    • If an XPath expression is not convertible into a CSS selector, find a CSS selector manually.
    • If there are too many inconvertible XPath selectors, add support for an xpathContentSelector key in service declarations, which would rely on document.evaluate supposedly exposed by JSdom.
  • Add a metadata entry at the root of the service declaration: "importedFrom": "https://github.com/tosdr/tosback2/blob/5acac7abb5e967cfafd124a5e275f98f6ecd423e/rules/4shared.com.xml". Make sure to use stable URLs (with a commit ID) and not a branch reference 🙂 Adding this key will require changing the validation schema.
  
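The XPath-to-CSS conversion can be sketched for the simple absolute expressions TOSback rules tend to use; this sketch handles tag steps, positional predicates and id predicates only, and anything fancier must be converted manually.

```javascript
// Sketch converting simple absolute XPath expressions into CSS selectors:
// tag steps, positional predicates ([2]) and id predicates ([@id='x']).
// Anything else returns null and must be handled manually.
function xpathToCss(xpath) {
  const steps = [];
  for (const step of xpath.replace(/^\/+/, '').split('/')) {
    let match = step.match(/^([a-zA-Z][\w-]*)(\[(\d+)\])?$/);
    if (match) {
      steps.push(match[3] ? `${match[1]}:nth-of-type(${match[3]})` : match[1]);
      continue;
    }
    match = step.match(/^([a-zA-Z][\w-]*)\[@id='([^']+)'\]$/);
    if (match) {
      steps.push(`${match[1]}#${match[2]}`);
      continue;
    }
    return null; // not convertible with this sketch
  }
  return steps.join(' > ');
}
```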
