gradiuscypher / bounty_tools
Various tools for managing bug bounty recon and exploration.
License: MIT License
When printing errors, the status message remains truncated.
The script should log all actions taken to a file that is downloaded before the droplet is destroyed.
Logging every action will help in debugging the automation.
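A minimal sketch of how this action logging could be wired up with the standard logging module (the log file name and logger name are assumptions, not the project's actual layout):

```python
import logging

# Hypothetical log location; the real path would come from the project's config.
LOG_FILE = "bounty_tools_actions.log"

logger = logging.getLogger("bounty_tools.actions")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(LOG_FILE)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

# Each automation step records what it did before the droplet is torn down, e.g.:
logger.info("Created droplet %s in region %s", "droplet-id", "nyc3")
logger.info("Started recon run for workspace %s", "example-workspace")
```

The file would then be pulled down (for example over SCP) as the last step before the droplet is destroyed.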
Have a rolling window for hosts to be checked against. For example:
Limit the host check to the last 7 days; that way, if we filter our Elasticsearch queries to 7 days, we know we have the most up-to-date hosts for that period. Duplicate hosts may exist outside of that window. We can also track historical changes to hosts, and when they are or are not present.
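A minimal sketch of what that 7-day rolling window could look like as an Elasticsearch query (the index name and timestamp field are assumptions):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

# Only hosts indexed in the last 7 days; older duplicates fall outside the window.
query = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"timestamp": {"gte": "now-7d/d"}}}
            ]
        }
    }
}

recent_hosts = es.search(index="hosts", body=query)
print(recent_hosts["hits"]["total"])
```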
There's way too much text when doing recon/import work. Find a way to suppress stdout unless requested.
Command line switches are a bit messy right now. Need to clean them up. Use this as a chance to do general cleanup as well.
*.db files are too messy in the root dir. Make a directory to copy DBs to, and rework all code to write/read from there.
The DO VMs take quite a long time to spin up and become usable. Figure out how to improve this speed. This may include setting up regional VM images in DigitalOcean as well.
Design the manager API to be easily used by a web API, as well as a command line tool.
These features should be included in the centralized server:
This is the task for integrating recon-ng into the Docker image, the Scanner REST API, and the Server's data persistence and interaction.
Need to automatically configure scanners after they've been created on DigitalOcean. Configuration includes installing all tools and setting up the Scanner API.
Sometimes when an automation loop is started with distribution, one of the workers automatically fails into a "DONE" state claiming the work queue is empty. Eg:
(venv3) gradius@ubuntu:~/github/bounty_tools$ ./bounty_tools.py --bulkrecon --hostjson target_hosts.json --distribute 3 --reconng
Required arguments not passed. Need either --createvm or --droplet to execute, along with --workspace and --domains
DONE
48720710 Grabbing work...{}
48720711 Grabbing work...{}
Done working...
48720710 Grabbing work...{}
When the script attempts to download results, it fails because the recon was never actually run.
Another check mark in the list of reasons why this flag system needs to be reworked.
I believe using a universal API for both central management and data persistence/access will be a win.
How will alerting work? Which platforms will it leverage? Task + implementation breakdown.
Build a Docker image with recon-ng to be deployed on DigitalOcean. This will be a first pass to see how running recon tools in a Docker image on DigitalOcean works. In the future, other tools may be added.
Improve the documentation to help make the tool easy to understand and use.
Edit: to be done after rewrite, to prevent doubling of work.
The idea behind this new set of bounty tools is this: a set of images that can be deployed in the cloud or locally and run various bug bounty tools. Each image runs on a Scanner instance that is controlled by a REST API. This task is to document data flow, data storage, etc.
A central Server controls all Scanners, contains all historical data, and manages things like scheduling scans, sending alerts, enriching data, etc.
A user interface will control the central Server. Ideally this will be a web app, but it will start as CLI tools initially.
For each IP address we have, we should get as much identifying information on it as possible. Things like whois, location, ownership, etc.
This can then be filled into Elasticsearch. First, we should attempt an update approach so that we can work in batches rather than during the recon cycle. If that approach does not work, we can integrate the enrichment into the recon cycle before indexing a host.
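A minimal sketch of the batched update approach, assuming host documents live in a hosts index and that a separate helper does the actual whois/location lookups (index name, field names, and enrich_ip are assumptions):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

def enrich_ip(ip):
    # Placeholder for the real whois/geolocation/ownership lookups.
    return {"whois": "example registrant", "location": "US", "owner": "Example Org"}

def enrichment_actions(hits):
    # One partial update per existing host document, applied outside the recon cycle.
    for hit in hits:
        yield {
            "_op_type": "update",
            "_index": "hosts",
            "_id": hit["_id"],
            "doc": enrich_ip(hit["_source"]["ip"]),
        }

hits = es.search(index="hosts", body={"query": {"match_all": {}}}, size=100)["hits"]["hits"]
helpers.bulk(es, enrichment_actions(hits))
```

Running this as a periodic batch keeps the enrichment out of the recon cycle; if that proves too slow or stale, the same enrich_ip call could be made just before indexing each host.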
This sort of provisioning might be better than the current paramiko + shell script approach for initial configuration, since a ton of setup time is lost waiting for SSH and box setup, and the current approach is a dirty hack.
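If the provisioning in question is cloud-init user data, a sketch of what it could look like with the python-digitalocean client (the token, droplet name, image, region, and installed packages are placeholders):

```python
import digitalocean

# Cloud-init runs while the droplet boots, instead of waiting for SSH
# and pushing a shell script over paramiko afterwards.
USER_DATA = """#cloud-config
packages:
  - docker.io
runcmd:
  - docker pull example/recon-image   # hypothetical image name
"""

droplet = digitalocean.Droplet(
    token="DO_API_TOKEN",
    name="scanner-01",
    region="nyc3",
    image="ubuntu-20-04-x64",
    size_slug="s-1vcpu-1gb",
    user_data=USER_DATA,
)
droplet.create()
```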
Use Censys.io to determine open ports.
Ports should be added to the DBs the same way that the Shodan integration adds ports.
E.g.: in Elastic, add a doc_type called port and link it to the host via _id.
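A minimal sketch of indexing those port documents, with the Censys lookup left as a placeholder since the exact client call depends on the censys library version (index name and fields are assumptions, and doc_type is modeled as a plain field here):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

def censys_open_ports(ip):
    # Placeholder for the Censys.io lookup of open ports for this IP.
    return [80, 443]

def index_ports(host_id, ip):
    # One "port" document per open port, linked back to the host document's _id,
    # mirroring how the Shodan results are stored.
    for port in censys_open_ports(ip):
        es.index(
            index="recon",
            body={"doc_type": "port", "host_id": host_id, "ip": ip, "port": port},
        )

index_ports("host-doc-id", "203.0.113.10")
```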
Part of this design requires a REST API to interact with the tools on the Scanner Docker images. For each tool you should be able to do the following:
Other functionality might be:
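One way the per-tool API could be shaped, sketched here with Flask; the endpoint names, parameters, and job handling are illustrative assumptions rather than the project's actual interface:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory job table; a real Scanner would track the running tool processes.
jobs = {}

@app.route("/recon-ng/run", methods=["POST"])
def start_run():
    # Kick off a tool run for the requested workspace/domains (assumed parameters).
    params = request.get_json()
    job_id = str(len(jobs) + 1)
    jobs[job_id] = {"params": params, "status": "running", "results": None}
    return jsonify({"job_id": job_id}), 202

@app.route("/recon-ng/status/<job_id>")
def run_status(job_id):
    return jsonify({"status": jobs[job_id]["status"]})

@app.route("/recon-ng/results/<job_id>")
def run_results(job_id):
    return jsonify({"results": jobs[job_id]["results"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```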
For example, I want to be able to see all of the things related to a target: hosts, ports, hostnames, etc. We should persist data even if it's no longer live, but mark it as such.
I want to be able to exclude/include specific workspaces.
Take the current code and migrate to a more segmented approach. Currently, using a single argparse object causes naming collisions between switches; see the sketch after the list below.
Core focus should be around ease of use and development.
Things to address:
Name collision between arguments
Separation of duties, e.g. plugins should also handle importing their data into the DBs.
Updated documentation, inline and otherwise.
Better consideration and plan for private plugins and how to properly import them. Possibly consider dynamic importing of plugins.
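A minimal sketch of the segmented approach using argparse subcommands, so each plugin owns its own switches and names can't collide (command and option names are assumptions):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(prog="bounty_tools")
    subparsers = parser.add_subparsers(dest="command", required=True)

    # Each plugin registers its own subcommand with its own, isolated switches.
    recon = subparsers.add_parser("recon", help="Run recon against a workspace")
    recon.add_argument("--workspace", required=True)
    recon.add_argument("--domains", required=True)

    importer = subparsers.add_parser("import", help="Import results into Elasticsearch")
    importer.add_argument("--workspace", required=True)

    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.command, vars(args))
```

With required=True on the subparsers, running the tool without a valid subcommand prints a usage error instead of silently doing nothing, which also helps with the silent-failure issue noted further down.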
There's not really a reason to continue including the complexity of supporting more than Elastic when Elastic provides everything we want.
Clean up all of the code related to anything other than persisting the data in Elastic. This should also cut down on the number of command switches.
This task includes cleaning up the import-style switches like --dbimport, etc.
During a second run against the same database, the function that imports into the local DB creates new entries for each host under "AltHost", since it is seeing the same IP for a second time.
Ideally we shouldn't import a host that has the same hostname as an AltHost.
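A minimal sketch of guarding the local-DB import against this, written with hypothetical SQLAlchemy models (the real Host/AltHost schema and session handling may differ):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Host(Base):
    __tablename__ = "hosts"
    id = Column(Integer, primary_key=True)
    ip = Column(String, unique=True)
    hostname = Column(String)

class AltHost(Base):
    __tablename__ = "alt_hosts"
    id = Column(Integer, primary_key=True)
    host_id = Column(Integer)
    hostname = Column(String)

def import_host(session, ip, hostname):
    # Skip the import if this hostname is already recorded as an AltHost.
    if session.query(AltHost).filter_by(hostname=hostname).first() is not None:
        return

    host = session.query(Host).filter_by(ip=ip).first()
    if host is None:
        session.add(Host(ip=ip, hostname=hostname))
    elif host.hostname != hostname:
        # Same IP seen again under a different hostname: record one AltHost entry.
        session.add(AltHost(host_id=host.id, hostname=hostname))
    session.commit()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
import_host(session, "203.0.113.10", "www.example.com")
import_host(session, "203.0.113.10", "www.example.com")  # second run: no duplicate rows
```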
Rebuild of core system functions; tasks include (but are not limited to):
Remove hosts, as well as prevent them from being recon'd in the first place.
The ability to feed the host through the enrichment flow and add custom tags/notes.
Applications for this include:
The tooling should be able to be split into modules to allow easier editing and integration:
Under each of these modules should be submodules that contain specific functionality, for example (see the sketch after this list):
Core
Data Storage
Recon
Scanning
Enrichment
Reporting/Alerting
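A minimal sketch of how those modules could fit together in code, with a shared base class that each submodule implements (class and module names are assumptions, not an agreed design):

```python
from abc import ABC, abstractmethod

class Plugin(ABC):
    """Base class every submodule (recon, scanning, enrichment, ...) implements."""

    # Which top-level module this plugin belongs to, e.g. "recon" or "enrichment".
    module = "core"

    @abstractmethod
    def run(self, target):
        """Do the work for one target and return results as a dict."""

class ReconNgPlugin(Plugin):
    module = "recon"

    def run(self, target):
        # Placeholder: the real submodule would drive recon-ng for this target.
        return {"target": target, "hosts": []}

class WhoisEnrichment(Plugin):
    module = "enrichment"

    def run(self, target):
        # Placeholder: the real submodule would do whois/ownership lookups.
        return {"target": target, "whois": None}

# Core discovers plugins and groups them by module, keeping each easy to edit in isolation.
registry = {}
for plugin in (ReconNgPlugin(), WhoisEnrichment()):
    registry.setdefault(plugin.module, []).append(plugin)

print({module: [type(p).__name__ for p in plugins] for module, plugins in registry.items()})
```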
A plugin that can gather service information based on active ports. Will gather things like:
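A minimal sketch of one way such a plugin could collect basic service information, via a simple TCP banner grab (the target IP, port list, and timeout are placeholder values):

```python
import socket

def grab_banner(ip, port, timeout=3.0):
    # Connect to an already-known open port and read whatever the service sends first.
    try:
        with socket.create_connection((ip, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None

# Placeholder target/ports; real input would come from the active-port data already stored.
for port in (21, 22, 25):
    banner = grab_banner("203.0.113.10", port)
    print(port, banner or "no banner")
```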
Currently, if you don't pass the right combination of arguments, no help message is printed and the tool fails silently. Need to help the user figure out why no action was taken.