the-robot / sqliv

massive SQL injection vulnerability scanner

Home Page: https://en.kali.tools/all/?tool=1334

License: GNU General Public License v3.0

Languages: Python 99.44%, Dockerfile 0.56%
Topics: sqli, sqli-vulnerability-scanner, sql-injection, scanning, scanner, crawler, reverse-ip-scan, multiprocessing

sqliv's Introduction

SQLiv

Massive SQL injection scanner

Features

  1. multiple domain scanning with an SQL injection dork via Bing, Google, or Yahoo
  2. targeted scanning of a specific domain (with crawling)
  3. reverse domain scanning

Both SQLi scanning and domain info checking are done with multiprocessing,
so the script is fast at scanning many URLs.
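The multiprocessing approach described above can be sketched roughly like this; `check_url` is a placeholder for illustration, not SQLiv's actual scanner function:

```python
import multiprocessing

def check_url(url):
    # placeholder for the real per-URL SQLi check; here we only report
    # whether the URL carries a query string worth testing
    return (url, "?" in url)

def scan(urls, processes=4):
    # fan the per-URL checks out across worker processes
    pool = multiprocessing.Pool(processes)
    try:
        return pool.map(check_url, urls)
    finally:
        pool.close()
        pool.join()
```

Because each URL check is independent and network-bound, a pool of workers keeps several requests in flight at once.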

A quick tutorial & screenshots are shown at the bottom, along with project contribution tips.


Installation

  1. git clone https://github.com/the-robot/sqliv.git
  2. sudo python2 setup.py -i

Dependencies

Pre-installed Systems


Quick Tutorial

1. Multiple domain scanning with SQLi dork

  • it searches multiple websites with the given dork and scans the results one by one
python sqliv.py -d <SQLI DORK> -e <SEARCH ENGINE>  
python sqliv.py -d "inurl:index.php?id=" -e google  

2. Targeted scanning

  • you can provide either a domain name or a specific URL with query parameters
  • if only a domain name is provided, it will crawl the site and collect URLs with queries
  • it then scans the URLs one by one
python sqliv.py -t <URL>  
python sqliv.py -t www.example.com  
python sqliv.py -t www.example.com/index.php?id=1  

3. Reverse domain lookup and scanning

  • performs a reverse domain lookup to find websites hosted on the same server as the target URL
python sqliv.py -t <URL> -r

4. Dumping scan results

  • you can dump the scan results as JSON by passing this argument
python sqliv.py -d <SQLI DORK> -e <SEARCH ENGINE> -o result.json
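The JSON dump might look roughly like the sketch below; the field names and the `(url, dbms)` pair shape are illustrative assumptions, so check the actual `result.json` for the real schema:

```python
import json

def dump_results(vulnerables, path):
    # vulnerables: assumed to be a list of (url, dbms) pairs from a scan
    data = [{"url": url, "dbms": dbms} for url, dbms in vulnerables]
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```

Calling `dump_results([("http://a.example/?id=1", "MySQL")], "result.json")` would produce a small JSON array with one object per vulnerable URL.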

View help

python sqliv.py --help

usage: sqliv.py [-h] [-d D] [-e E] [-p P] [-t T] [-r]

optional arguments:
  -h, --help  show this help message and exit
  -d D        SQL injection dork
  -e E        search engine [Google only for now]
  -p P        number of websites to look for in search engine
  -t T        scan target website
  -r          reverse domain

screenshots

(screenshots omitted)


Development

TODO

  1. POST form SQLi vulnerability testing

sqliv's People

Contributors

phgalick, the-robot


sqliv's Issues

Windows platform is not supported for installation - How to Fix?

Running setup.py install for MarkupSafe ... done
Running setup.py install for sphinx-better-theme ... done
Running setup.py install for nyawc ... done
Successfully installed Jinja2-2.10 MarkupSafe-1.0 Pygments-2.2.0 alabaster-0.7.11 babel-2.6.0 docutils-0.14 imagesize-1.0.0 lxml-4.0.0 nyawc-1.8.1 pockets-0.6.2 pytz-2018.5 requests-toolbelt-0.8.0 snowballstemmer-1.2.1 sphinx-1.5.5 sphinx-better-theme-0.1.5 sphinxcontrib-napoleon-0.6.1
Windows platform is not supported for installation

module problem

root@xxxx:~/Desktop/sqliv# python sqliv.py -d "inurl:index.php?id=" -e google
Traceback (most recent call last):
  File "sqliv.py", line 13, in <module>
    from src.crawler import crawler
  File "/root/Desktop/sqliv/src/crawler.py", line 5, in <module>
    from nyawc.Options import Options
ImportError: No module named nyawc.Options

SSL Certificate Error

Checking the website whether it's online or not.

Traceback (most recent call last):
  File "sqliscan.py", line 214, in <module>
    SqliScan(urls)
  File "sqliscan.py", line 64, in __init__
    self.urlRequest(self.urlReader(urls))
  File "sqliscan.py", line 98, in urlRequest
    if self.siteStatus(url): # True mean website is online
  File "sqliscan.py", line 108, in siteStatus
    urllib2.urlopen(request)
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1241, in https_open
    context=self._context)
  File "/usr/lib/python2.7/urllib2.py", line 1195, in do_open
    h.request(req.get_method(), req.get_selector(), req.data, headers)
  File "/usr/lib/python2.7/httplib.py", line 1057, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.7/httplib.py", line 1097, in _send_request
    self.endheaders(body)
  File "/usr/lib/python2.7/httplib.py", line 1053, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 897, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 859, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 1278, in connect
    server_hostname=server_hostname)
  File "/usr/lib/python2.7/ssl.py", line 353, in wrap_socket
    _context=self)
  File "/usr/lib/python2.7/ssl.py", line 601, in __init__
    self.do_handshake()
  File "/usr/lib/python2.7/ssl.py", line 838, in do_handshake
    match_hostname(self.getpeercert(), self.server_hostname)
  File "/usr/lib/python2.7/ssl.py", line 272, in match_hostname
    % (hostname, ', '.join(map(repr, dnsnames))))
ssl.CertificateError: hostname 'megabox.me' doesn't match either of '*.zbigz.com', 'zbigz.com'

Output options

What about allowing the user to get just the vulnerable sites in JSON format?

Can not run the script

Hello,

I can not run the script. I am not a Python person.

The error is -

#python sqliv.py -e google -t test.com

Traceback (most recent call last):
  File "sqliv.py", line 158, in <module>
    vulnerables = singleScan(args.target)
  File "sqliv.py", line 50, in singleScan
    io.stdout("crawling {}".format(url))
ValueError: zero length field name in format
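This particular ValueError comes from Python 2.6, where `str.format` requires explicit field indices; `{}` auto-numbering was only added in Python 2.7. Numbering the field makes the call work on 2.6, 2.7, and 3.x alike:

```python
url = "www.example.com"
# "{}" raises "zero length field name in format" on Python 2.6;
# "{0}" is accepted by Python 2.6+ and 3.x alike
print("crawling {0}".format(url))
```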

Add output format

Please add an output format, XML or JSON. Thank you for your attention :D

Reverse ip fails

URL error, [Errno 111] Connection refused
Traceback (most recent call last):
  File "sqliv.py", line 117, in <module>
    domains = reverseip.reverseip(args.target)
  File "/root/sqliv/src/reverseip.py", line 43, in reverseip
    obj = json.loads(result)
UnboundLocalError: local variable 'result' referenced before assignment

That's because the reverse-IP site (domains.yougetsignal.com) returns a Forbidden error.
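The UnboundLocalError is only a symptom: `result` is assigned inside a failing HTTP call, but `json.loads(result)` runs regardless. A minimal sketch of a guard, with a hypothetical `fetch` callable standing in for the real HTTP request:

```python
import json

def reverse_ip(fetch, target):
    # fetch(target) stands in for the HTTP call to the reverse-IP API
    try:
        result = fetch(target)
    except OSError as err:
        print("URL error, {0}".format(err))
        return []  # bail out instead of touching an unassigned `result`
    return json.loads(result).get("domains", [])
```

With this guard, a refused or Forbidden response yields an empty domain list instead of a crash.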

And again: ValueError: dictionary update sequence element #0 has length 1; 2 is required

Checking the website whether it's online or not.
Connected, URL is valid.
Traceback (most recent call last):
  File "sqliscan.py", line 214, in <module>
    SqliScan(urls)
  File "sqliscan.py", line 64, in __init__
    self.urlRequest(self.urlReader(urls))
  File "sqliscan.py", line 99, in urlRequest
    self.scanVulnerability(site)
  File "sqliscan.py", line 128, in scanVulnerability
    parms = dict([item.split("=") for item in parsedUrl[4].split("&")])
ValueError: dictionary update sequence element #0 has length 1; 2 is required
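The crash happens because `dict([item.split("=") ...])` cannot handle an empty query or a parameter without a value (as in `league.php?id=`). The standard library parser tolerates both; a sketch with Python 3 names (on Python 2 the same functions live in the `urlparse` module):

```python
from urllib.parse import urlparse, parse_qsl

def query_params(url):
    # parse_qsl tolerates empty queries and valueless keys,
    # unlike dict(item.split("=") ...)
    return dict(parse_qsl(urlparse(url).query, keep_blank_values=True))
```

`query_params("http://example.com/league.php?id=")` returns `{'id': ''}`, and a URL with no query at all returns `{}` instead of raising.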

File "sqliv.py", line 13, in <module> from src.crawler import Crawler ETC.

root@kali:~/sqliv# python sqliv.py -d "inurl:index.php?id=" -e google
Traceback (most recent call last):
  File "sqliv.py", line 13, in <module>
    from src.crawler import Crawler
  File "/root/sqliv/src/crawler.py", line 5, in <module>
    from nyawc.Options import Options
ImportError: No module named nyawc.Options

I am using the latest Kali Linux rolling edition. Any tips? Watch here -> https://www.youtube.com/watch?v=P0ms5y5NmxM&feature=youtu.be

MySQL and MariaDB

These are effectively the same thing; specifying one as the other is a serious mistake. Implement all the MariaDB checks under MySQL.

[Feature] Progress info

Needs progress info in case there are a lot of links to scan.
For example

[MSG] [XX:XX:XX] {5/10} scanning .....
[MSG] [XX:XX:XX] {6/10} scanning .....
[MSG] [XX:XX:XX] {7/10} scanning .....
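A sketch of producing such lines, with the format assumed from the example above:

```python
import time

def progress_line(index, total, url):
    # e.g. "[MSG] [23:26:47] {5/10} scanning http://..."
    stamp = time.strftime("%H:%M:%S")
    return "[MSG] [{0}] {{{1}/{2}}} scanning {3}".format(stamp, index, total, url)
```

The caller would print `progress_line(i, len(urls), url)` from an `enumerate(urls, start=1)` loop as each site is scanned.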

Installation Error

I'm trying to install sqliv however getting this error, Please advise....

root@kali:~/Desktop/Newtool/sqliv# pip install -r requirements.txt
Collecting bs4 (from -r requirements.txt (line 1))
Downloading bs4-0.0.1.tar.gz
Collecting termcolor (from -r requirements.txt (line 2))
Downloading termcolor-1.1.0.tar.gz
Collecting terminaltables (from -r requirements.txt (line 3))
Downloading terminaltables-3.1.0.tar.gz
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python2.7/dist-packages/beautifulsoup4-4.4.1-py2.7.egg (from bs4->-r requirements.txt (line 1))
Building wheels for collected packages: bs4, termcolor, terminaltables
Running setup.py bdist_wheel for bs4 ... done
Stored in directory: /root/.cache/pip/wheels/84/67/d4/9e09d9d5adede2ee1c7b7e8775ba3fbb04d07c4f946f0e4f11
Running setup.py bdist_wheel for termcolor ... done
Stored in directory: /root/.cache/pip/wheels/de/f7/bf/1bcac7bf30549e6a4957382e2ecab04c88e513117207067b03
Running setup.py bdist_wheel for terminaltables ... done
Stored in directory: /root/.cache/pip/wheels/96/0c/9a/0ec2bcad2ac1fb1d0e4695879386460cec1a947cd7413d1b17
Successfully built bs4 termcolor terminaltables
Installing collected packages: bs4, termcolor, terminaltables
Successfully installed bs4-0.0.1 termcolor-1.1.0 terminaltables-3.1.0
root@kali:~/Desktop/Newtool/sqliv# clear

root@kali:~/Desktop/Newtool/sqliv# ls
libs     README.md         screenshots  sqliv.py
LICENSE  requirements.txt  setup.py     src
root@kali:~/Desktop/Newtool/sqliv# python2 setup.py -i
Requirement already satisfied: bs4 in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python2.7/dist-packages/beautifulsoup4-4.4.1-py2.7.egg (from bs4)
Requirement already satisfied: termcolor in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied: terminaltables in /usr/local/lib/python2.7/dist-packages
Installation finished
Files are installed under /usr/share/sqliv
Run: sqliv --help
root@kali:~/Desktop/Newtool/sqliv# sqliv
Traceback (most recent call last):
  File "/usr/share/sqliv/sqliv.py", line 12, in <module>
    from src import search
  File "/usr/share/sqliv/src/search.py", line 6, in <module>
    from libs import google
  File "/usr/share/sqliv/libs/google.py", line 48, in <module>
    from bs4 import BeautifulSoup
  File "build/bdist.linux-x86_64/egg/bs4/__init__.py", line 30, in <module>
  File "build/bdist.linux-x86_64/egg/bs4/builder/__init__.py", line 314, in <module>
  File "build/bdist.linux-x86_64/egg/bs4/builder/_html5lib.py", line 70, in <module>
AttributeError: 'module' object has no attribute '_base'

Line 117 of setup.py prints a wrong path

When installing sqliv with
sudo python2 setup.py -i
the network suddenly broke. Reinstalling with
sudo python2 setup.py -i
then fails with the error: executable file exists under /usr/share/sqliv.
I solved it with
sudo rm -rf /usr/share/sqliv
sudo rm -rf /usr/bin/sqliv
and then reinstalled. It works now.

Invalid Option

After following your tutorial, I got something like

Invalid Option

when running: python sqliscan.py

Anything wrong?

N.Y.A.W.C for request (GET,POST,PUT,...) crawling

Hi,

I saw that the scanner still needs support for crawling POST requests in e.g. forms. I worked on a library (N.Y.A.W.C) that does exactly that.

It can scan a single URL for requests or it can scan an entire website for requests. While crawling it provides callbacks so you can view/use the requests it found.

I thought you may like it for SQLiv. I'm happy to help if you want to implement it.

too many arguments error

from: too many arguments
./setup.py: line 10: author: command not found
./setup.py: line 11: email: command not found
./setup.py: line 12: license: command not found
./setup.py: line 13: version: command not found
./setup.py: line 17: FILE_PATH_LINUX: command not found
./setup.py: line 18: EXEC_PATH_LINUX: command not found
./setup.py: line 20: FILE_PATH_MAC: command not found
./setup.py: line 21: EXEC_PATH_MAC: command not found
./setup.py: line 24: syntax error near unexpected token `('
./setup.py: line 24: `def metadata():'

How do I correct this?

Split URL by &amp;

The &amp; in the scraped URLs is not a valid part of the URL; Google uses this encoding in its web-cache and redirect URLs. When scanning for vulnerabilities it can cause an SQL error even though the parameter is not part of the target website, which in turn produces a false positive. Suggestion: split the URLs after scraping at the &amp; query separator.
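One way to handle this, unescaping the HTML entity before parsing rather than discarding everything after it, can be sketched as follows (this is a suggested normalization, not SQLiv's current behavior):

```python
from html import unescape
from urllib.parse import urlparse, parse_qsl

def clean_scraped_url(url):
    # turn "&amp;" (and other HTML entities) back into their literal
    # characters so the query parses into real parameters
    return unescape(url)

def params(url):
    return dict(parse_qsl(urlparse(url).query))
```

For a scraped URL like `http://example.com/index.php?id=1&amp;page=2`, cleaning first yields the two real parameters `id` and `page` instead of a bogus `amp;page` key.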

ImportError: No module named nyawc.Options

Installed from git using git clone and install method listed.

root@kali:~/Tools# git clone https://github.com/Hadesy2k/sqliv.git && cd sqliv && sudo python2 setup.py -i
Cloning into 'sqliv'...
remote: Counting objects: 825, done.
remote: Total 825 (delta 0), reused 0 (delta 0), pack-reused 825
Receiving objects: 100% (825/825), 893.87 KiB | 169.00 KiB/s, done.
Resolving deltas: 100% (462/462), done.
Collecting bs4
  Downloading bs4-0.0.1.tar.gz
Requirement already satisfied: beautifulsoup4 in /usr/lib/python2.7/dist-packages (from bs4)
Building wheels for collected packages: bs4
  Running setup.py bdist_wheel for bs4 ... done
  Stored in directory: /root/.cache/pip/wheels/84/67/d4/9e09d9d5adede2ee1c7b7e8775ba3fbb04d07c4f946f0e4f11
Successfully built bs4
Installing collected packages: bs4
Successfully installed bs4-0.0.1
Collecting termcolor
  Downloading termcolor-1.1.0.tar.gz
Building wheels for collected packages: termcolor
  Running setup.py bdist_wheel for termcolor ... done
  Stored in directory: /root/.cache/pip/wheels/de/f7/bf/1bcac7bf30549e6a4957382e2ecab04c88e513117207067b03
Successfully built termcolor
Installing collected packages: termcolor
Successfully installed termcolor-1.1.0
Collecting terminaltables
  Downloading terminaltables-3.1.0.tar.gz
Building wheels for collected packages: terminaltables
  Running setup.py bdist_wheel for terminaltables ... done
  Stored in directory: /root/.cache/pip/wheels/96/0c/9a/0ec2bcad2ac1fb1d0e4695879386460cec1a947cd7413d1b17
Successfully built terminaltables
Installing collected packages: terminaltables
Successfully installed terminaltables-3.1.0
Collecting nyawc
  Downloading nyawc-1.8.0.tar.gz
Requirement already satisfied: beautifulsoup4==4.6.0 in /usr/lib/python2.7/dist-packages (from nyawc)
Collecting lxml==4.0.0 (from nyawc)
  Downloading lxml-4.0.0-cp27-cp27mu-manylinux1_x86_64.whl (5.3MB)
    100% |████████████████████████████████| 5.3MB 204kB/s 
Requirement already satisfied: requests==2.18.4 in /usr/lib/python2.7/dist-packages (from nyawc)
Collecting requests_toolbelt==0.8.0 (from nyawc)
  Downloading requests_toolbelt-0.8.0-py2.py3-none-any.whl (54kB)
    100% |████████████████████████████████| 61kB 797kB/s 
Collecting sphinx-better-theme==0.13 (from nyawc)
  Could not find a version that satisfies the requirement sphinx-better-theme==0.13 (from nyawc) (from versions: 0.1, 0.1.4, 0.1.5)
No matching distribution found for sphinx-better-theme==0.13 (from nyawc)
Installation finished
Files are installed under /usr/share/sqliv
Run: sqliv --help
root@kali:~/Tools/sqliv# ls
Dockerfile            lib      README.md         screenshots  sqliv.py
Dockerfile_README.md  LICENSE  requirements.txt  setup.py     src

root@kali:~/Tools/sqliv# python sqliv.py
Traceback (most recent call last):
  File "sqliv.py", line 13, in <module>
    from src.crawler import Crawler
  File "/root/Tools/sqliv/src/crawler.py", line 5, in <module>
    from nyawc.Options import Options
ImportError: No module named nyawc.Options

Service Unreachable

[MSG] [23:26:47] searching for websites with given dork
[503] Service Unreachable

SQLi checks probably won't work as expected

You are currently checking:

'SQL syntax error': "error in your SQL syntax",
'Query failed': "Query failed",
'Bad argument': "supplied argument is not a valid MySQL result resource in",
'JET DBE error': "Microsoft JET Database Engine error '80040e14'",
'Unknown error': "Error:unknown",
'Fatal error': "Fatal error",
'MySQL fetch': "mysql_fetch",
'Syntax error': "Syntax error"

So you are checking two, maybe three, different types of DBMSs. What about PostgreSQL? Or SQLite?

Let's go through these really quick:

'SQL syntax error': "error in your SQL syntax",  # mySQL
'Query failed': "Query failed",  # possibly mySQL (not really a good test because a lot of queries can fail)
'Bad argument': "supplied argument is not a valid MySQL result resource in",  # once again possibly mySQL, still not good because a bad argument does not prove SQLi
'JET DBE error': "Microsoft JET Database Engine error '80040e14'",  # no idea what you're trying to find here maybe MsAccess?
'Unknown error': "Error:unknown",  # not a good check, an unknown error would not be a SQLi point
'Fatal error': "Fatal error",  # same as above
'MySQL fetch': "mysql_fetch",  # fetch array is not a good indicator of SQLi because it expects a boolean
'Syntax error': "Syntax error"  # this could work for MySQL

So now that we've gone through that, you have two good checks for SQL injection. The rest are bad checks that will probably produce false positives. My suggestion is to use something like this:

# I'll give you two of each one, you can figure out the rest
"MySQL": (r"SQL syntax.*MySQL", r"Warning.*mysql_.*"),
"PostgreSQL": (r"PostgreSQL.*ERROR", r"Warning.*\Wpg_.*"),
"Microsoft SQL Server": (r"OLE DB.* SQL Server", r"(\W|\A)SQL Server.*Driver"),
"Microsoft Access": (r"Microsoft Access Driver", r"Access Database Engine"),
"Oracle": (r"\bORA-[0-9][0-9][0-9][0-9]", r"Oracle error"),
"IBM DB2": (r"CLI Driver.*DB2", r"DB2 SQL error"),
"SQLite": (r"SQLite/JDBCDriver",r"System.Data.SQLite.SQLiteException"),
"Sybase": (r"(?i)Warning.*sybase.*", None),

Not only are these good checks, but they cover multiple database types and make it easy to extract which DBMS you are scanning.
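Applying such a table to a response body can be sketched like this (the table below reuses a few of the patterns suggested above and is deliberately not exhaustive):

```python
import re

# a subset of the suggested error signatures, keyed by DBMS
DBMS_ERRORS = {
    "MySQL": (r"SQL syntax.*MySQL", r"Warning.*mysql_.*"),
    "PostgreSQL": (r"PostgreSQL.*ERROR", r"Warning.*\Wpg_.*"),
    "SQLite": (r"SQLite/JDBCDriver", r"System\.Data\.SQLite\.SQLiteException"),
}

def detect_dbms(body):
    # return the first DBMS whose error signature appears in the response body
    for dbms, patterns in DBMS_ERRORS.items():
        for pattern in patterns:
            if pattern and re.search(pattern, body, re.IGNORECASE):
                return dbms
    return None
```

Besides flagging the page as error-prone, the match tells you which DBMS syntax to use for any follow-up payloads.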

Now let's get into your injection syntax. Using ' may produce an error, but that's not enough. Why would you assume something is vulnerable from a single thrown error? What if the vulnerability lies inside )))/**/AND/**/1=1/**/((( instead of a basic boolean check with '? I suggest you use something a little more thorough than adding ' to each parameter: use a couple of syntax checks at each parameter instead of a single check and calling it good.

For example, on a site like www.vuln.com/index.php?id=10&page=10, just because the query id=10' produces an error doesn't mean the query page=10 will produce the same error. Take it slow with the SQLi checks and test each query parameter one at a time, otherwise you will skip vulnerable sites.
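Testing each parameter in isolation can be sketched like this; the helper name is illustrative, not from SQLiv, and Python 3 parsing functions are used:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def injected_urls(url, payload="'"):
    # yield one test URL per query parameter, appending the payload to
    # that parameter only, so every parameter is checked on its own
    parts = urlparse(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    for i, (key, value) in enumerate(params):
        mutated = list(params)
        mutated[i] = (key, value + payload)
        yield key, urlunparse(parts._replace(query=urlencode(mutated, safe="'")))
```

For `www.vuln.com/index.php?id=10&page=10` this yields one URL with `id=10'` and another with `page=10'`, and the same loop can be repeated with other payloads than the bare quote.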

SyntaxError: Missing parentheses line 36

File "sqliv.py", line 36
print "" # move carriage return to newline
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("" # move carriage return to newline)?
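This SyntaxError means the script, which is written for Python 2, was invoked with Python 3; Python 3 only accepts print as a function. Running the tool with python2 avoids the error, or the statement can be rewritten in the function form, which Python 2.7 also accepts for a single argument:

```python
# Python 2's statement form `print ""` is a SyntaxError on Python 3;
# the function form below prints the same empty line on Python 3
print("")  # move carriage return to newline
```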

docker build fails

The docker build fails:

# docker build -t sqliv .
[...]
Step 5/7 : RUN pip install -r requirements.txt && python setup.py -i
 ---> Running in 9c8eb9c2a873
Collecting bs4 (from -r requirements.txt (line 1))
  Downloading bs4-0.0.1.tar.gz
Collecting termcolor (from -r requirements.txt (line 2))
  Downloading termcolor-1.1.0.tar.gz
Collecting terminaltables (from -r requirements.txt (line 3))
  Downloading terminaltables-3.1.0.tar.gz
Collecting nyawc (from -r requirements.txt (line 4))
  Downloading nyawc-1.8.0.tar.gz
Collecting beautifulsoup4 (from bs4->-r requirements.txt (line 1))
  Downloading beautifulsoup4-4.6.0-py2-none-any.whl (86kB)
Collecting lxml==4.0.0 (from nyawc->-r requirements.txt (line 4))
  Downloading lxml-4.0.0-cp27-cp27mu-manylinux1_x86_64.whl (5.3MB)
Collecting requests==2.18.4 (from nyawc->-r requirements.txt (line 4))
  Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
Collecting requests_toolbelt==0.8.0 (from nyawc->-r requirements.txt (line 4))
  Downloading requests_toolbelt-0.8.0-py2.py3-none-any.whl (54kB)
Collecting sphinx==1.5.5 (from nyawc->-r requirements.txt (line 4))
  Downloading Sphinx-1.5.5-py2.py3-none-any.whl (1.6MB)
Collecting sphinx-better-theme==0.13 (from nyawc->-r requirements.txt (line 4))
  Could not find a version that satisfies the requirement sphinx-better-theme==0.13 (from nyawc->-r requirements.txt (line 4)) (from versions: 0.1, 0.1.4, 0.1.5)
No matching distribution found for sphinx-better-theme==0.13 (from nyawc->-r requirements.txt (line 4))
The command '/bin/sh -c pip install -r requirements.txt && python setup.py -i' returned a non-zero code: 1

OSError in termux

python2 sqliv.py -d inurl:article.php?id= -e google
[MSG] [22:33:14] searching for websites with given dork
[MSG] [22:33:18] 10 websites found
Traceback (most recent call last):
  File "sqliv.py", line 94, in <module>
    vulnerables = scanner.scan(websites)
  File "/data/data/com.termux/files/home/sqliv/src/scanner.py", line 22, in scan
    pool = multiprocessing.Pool(max_processes, init)
  File "/data/data/com.termux/files/usr/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/data/data/com.termux/files/usr/lib/python2.7/multiprocessing/pool.py", line 140, in __init__
    self._setup_queues()
  File "/data/data/com.termux/files/usr/lib/python2.7/multiprocessing/pool.py", line 236, in _setup_queues
    self._inqueue = SimpleQueue()
  File "/data/data/com.termux/files/usr/lib/python2.7/multiprocessing/queues.py", line 352, in __init__
    self._rlock = Lock()
  File "/data/data/com.termux/files/usr/lib/python2.7/multiprocessing/synchronize.py", line 147, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1)
  File "/data/data/com.termux/files/usr/lib/python2.7/multiprocessing/synchronize.py", line 75, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented
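Termux lacks the POSIX semaphores that `multiprocessing.Pool` needs, which is where the `[Errno 38]` comes from. One possible workaround (a sketch, not SQLiv's code; `check_url` is a placeholder) is to fall back to a thread pool, which uses plain threading primitives and still overlaps the network-bound requests:

```python
from multiprocessing.pool import ThreadPool

def check_url(url):
    # placeholder for the real per-URL SQLi check
    return (url, False)

def scan(urls, workers=2):
    # ThreadPool needs no SemLock, so it does not raise
    # "OSError: [Errno 38] Function not implemented" on Termux
    pool = ThreadPool(workers)
    try:
        return pool.map(check_url, urls)
    finally:
        pool.close()
        pool.join()
```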

Crawler doesn't work

Whichever domain I specify, the crawler outputs: "found no suitable urls to test SQLi"

crawler.py throws TypeError 'bool' object is not iterable

Hey, I tried to use your crawler to check my web server, which has several domains on it, for SQLi.
Sadly, after checking some of the websites, it throws an error in line 20 of crawler.py.

Traceback (most recent call last):
  File "sqliv.py", line 159, in <module>
    vulnerables = singleScan(args.target)
  File "sqliv.py", line 52, in singleScan
    urls = crawler.crawl(url)
  File "/home/xxx/Desktop/sqliv/src/crawler.py", line 20, in crawl
    result, URL = html.getHTML(url, True)
TypeError: 'bool' object is not iterable

As this was for a -r option scan, I tried the same thing for the single domain and again it throws the same error in line 20 of crawler.py.
Sadly, working with Google and Stack Overflow I couldn't solve the problem.
I guess I found a site with a vulnerability there, but
the error still stops me from scanning the other websites.

Operating system is Parrot OS v 3.7
Python Version is 2.7.13
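The crash apparently happens because `html.getHTML` returns False on failure while the caller unconditionally unpacks a tuple. A hedged sketch of the missing guard, with `get_html` as a stand-in for the real function:

```python
def safe_fetch(url, get_html):
    # get_html(url) stands in for html.getHTML(url, True); it is assumed
    # to return (html, final_url) on success and False on failure
    fetched = get_html(url)
    if not fetched:
        return None, None  # skip this site instead of raising TypeError
    return fetched
```

With the guard in place, a failed fetch during an -r scan would skip that site and let the remaining domains be scanned.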

[503] Service Unreachable

hi, there

Could anyone advise about this issue?

root@kali:~/Desktop/Tool/sqliv# python sqliv.py -d "inurl:index.php?id=" -e google
[MSG] [03:40:42] searching for websites with given dork
[503] Service Unreachable

Appreciate your help.

suggestion adding new filter

Could you add a filter called site to restrict output URLs to a specific wanted domain, like .com, .org, or .info?

I think it would be great to include the +site: dork alongside the other dorks in the same search.

BR

ValueError: dictionary update sequence element #0 has length 1; 2 is required

Used Mode:
2

Used Dork:
league.php?id=

Full Log: Website Information

Domain Name : github.com
Protocol : https
Path : /paquettg/leaguewrap/blob/master/src/LeagueWrap/Api/League.php
Query[s] :

Checking the Website whether it's up or down.
[+] Connected, URL is valid.

Traceback (most recent call last):
  File "sqlivul.py", line 246, in <module>
    main()
  File "sqlivul.py", line 89, in __init__
    self.urlreq(self.readfile())
  File "sqlivul.py", line 124, in urlreq
    self.scanurl(site)
  File "sqlivul.py", line 206, in scanurl
    parms = dict([item.split("=") for item in parsed_url[4].split("&")])
ValueError: dictionary update sequence element #0 has length 1; 2 is required

[503] Service Unreachable

root@localhost:~/sqliv# python sqliv.py -d "inurl:index.php?id=" -e google
[MSG] [20:02:05] searching for websites with given dork
[503] Service Unreachable

I am running Ubuntu 14.04.1 LTS; it shouldn't be a network problem, since it works on Debian under the same network. Only Google is not working. Is it possible to debug it, or make it more verbose?

Feedback: Bing.com/Yandex.ru

Hey Hadesy2k, is there a possibility to add bing.com & yandex.ru as other search engines besides Google? Google isn't the best choice because of its limits on search queries.
