hashview / hashview

A web front-end for password cracking and analytics

Home Page: https://www.hashview.io

License: GNU General Public License v3.0

Python 57.90% CSS 0.12% HTML 41.88% Mako 0.10%
python hashcat web flask analytics distributed

hashview's Introduction

Hashview v0.8.1

Hashview is a tool for security professionals to help organize and automate the repetitious tasks related to password cracking. It is broken into two components: the Hashview Server and the Hashview Agent. The Hashview Server is a web application that manages one or more agents, deployed by you on dedicated hardware (note: you can run the server and agent on the same machine). Hashview strives to bring consistency to your hashcat tasks while delivering analytics with pretty pictures ready for ctrl+c, ctrl+v into your reports.

Note: If you are running version v0.8.0 and want to upgrade, all you need to do is git pull on main and start hashview.py; this should automatically upgrade your instance to the latest version.

Server Requirements

  1. Python 3.7+
  2. MySQL DB installed with a known username/password
  3. Access to an SMTP email service (used for password resets and notifications)

Agent Requirements

  1. Python 3.7+
  2. Hashcat 6.2.x+

Installation

Follow these instructions to install Hashview Server on an Ubuntu 20.04.3 LTS server. In theory Hashview should be able to run on any *nix system, but the devs have only installed/tested on Debian/Ubuntu.

1) Setup MySQL

sudo apt update
sudo apt install mysql-server
sudo service mysql start
sudo mysql_secure_installation

2) Configure MySQL

Log into your MySQL server and create a dedicated user for Hashview. Hashview can run as root, but doesn't need to, and since we practice what we preach, we should use a lower-privilege account for this. If you're installing Hashview on a different server than the system the MySQL DB is running on, adjust the account creation accordingly.

sudo mysql
CREATE DATABASE hashview;
CREATE USER 'hashview'@'localhost' IDENTIFIED BY 'DoNotUseThisPassword123!';
GRANT ALL PRIVILEGES ON hashview.* TO 'hashview'@'localhost';
FLUSH PRIVILEGES;
EXIT;

3) Install Hashview Server

Run the following to install Hashview after the MySQL DB has been set up.

sudo apt-get install python3 python3-pip python3-flask
git clone https://github.com/hashview/hashview
cd hashview
pip3 install -r requirements.txt
./setup.py
./hashview.py # (note you can add a --debug if you are attempting to troubleshoot an issue)

4) Log into your hashview server

Navigate to your server; the default port is 8443: https://IP:8443

(Note) Because Hashview is installed with a self-signed certificate, you will be prompted about it being invalid. You're welcome to use properly signed certs by replacing the files under hashview/hashview/control/ssl/.

Once logged in, before you can start cracking hashes, you need to install a Hashview-Agent.

Installing Hashview-Agent

After you've installed hashview you will need to install a hashview-agent. The agent can run on the same system as hashview, but doesn't have to.

1) Log into hashview as an Administrator

2) Navigate to Agents Menu

3) Click Download Agent to get a .tgz package of the hashview-agent

4) Move agent to the system you'd like to run it on

5) Install Agent

You will need to decompress the package and run the hashview-agent.py script. Upon initial execution it will prompt you for information about your hashview server.

tar -xzvf hashview-agent.<version>.tgz
cd install/
cp -r hashview-agent ../
cd ../hashview-agent
pip3 install -r requirements.txt
python3 ./hashview-agent.py

6) Once the agent is running, you (or another admin) will need to navigate back to Hashview->Manage->Agents and approve the agent.

Developing and Contributing

Please see the Contribution Guide for how to develop and contribute.
If you have any problems, please consult the Issues page first. If you don't see a related issue, feel free to add one and we'll help.

Feature Requests

We accept Pull Requests :). But if you'd like a feature without submitting code, first check the Issues section to see if someone has already requested it. If so, go ahead and upvote that request. Otherwise, feel free to create your own new feature request. No promises it'll get implemented, but it can't hurt to ask.

Authors

Contact us on Twitter
@jarsnah12

hashview's People

Contributors

bandrel, hafiziahmad, i128, luqhman, osean-man, staticn0thing, vsamiamv, yoshi325


hashview's Issues

One and done

Option to abort all other tasks for a job after one account has been successfully recovered. Alternatively, a configurable minimum count could be used instead of one.

Users List -> Info should include associated information

As an admin, when navigating to Manage->Users and selecting the 'info' icon, a modal is displayed that is empty. It should be populated with the user's associated:

  • Jobs
  • wordlists
  • hash_notifications
  • job_notifications
  • Hashes
  • Rules
  • Tasks
  • Task Groups

Code should probably go in:
hashview/users/routes.py
hashview/templates/users.html

Hash notifications, select all

During the job creation process, once a user selects that they'd like to be notified for a cracked hash, give them the ability to 'select all'.

Probably needs to be JavaScript, and maybe the button should go up in the table header by the word 'Notify?'.

Support Data Retention on Hashview-Agents

Hashview-Agent stores data in

hashview-agent/control/hashes/
hashview-agent/control/tmp/
hashview-agent/control/outfiles/

Data in these directories should follow the Hashview server's data retention policy. Initiation of the cleanup action should be started by the client, probably between check-ins. The client will need to submit a query to the server; the server will need an API endpoint that accepts a request for the data retention policy and returns a number indicating how many days.

We can probably reuse the same cleanup/calculation code used for server-side data retention.
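As a rough, hedged sketch of the client-side half of this (the directory list comes from the issue above; the retention semantics and function names are assumptions, not the real Hashview code):

```python
import os
import time

# Directories named in the issue above.
AGENT_DATA_DIRS = [
    'hashview-agent/control/hashes/',
    'hashview-agent/control/tmp/',
    'hashview-agent/control/outfiles/',
]

def is_expired(mtime, now, retention_days):
    """True if a file modified at `mtime` is older than the retention policy.
    Assumes retention_days <= 0 means 'retain forever'."""
    if retention_days <= 0:
        return False
    return (now - mtime) > retention_days * 86400

def cleanup_expired(dirs, retention_days, now=None):
    """Delete files older than the server-provided policy; return removed paths.
    retention_days would come from the proposed server API endpoint."""
    now = now or time.time()
    removed = []
    for directory in dirs:
        if not os.path.isdir(directory):
            continue
        for name in os.listdir(directory):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and is_expired(os.path.getmtime(path), now, retention_days):
                os.remove(path)
                removed.append(path)
    return removed
```

The agent would call cleanup_expired between check-ins after fetching the day count from the server.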

Adding Hash Types

Hello, can anyone point me in the right direction for where to add hash types that are not automatically supported?

Admin->settings->Database tab

We should have a new tab available for admins only. This tab should have the following features:

  1. Database export: essentially mysqldump+gzip, delivered to the end user as a file download
  2. Database optimization: a way to run the following commands: OPTIMIZE TABLE hashes; OPTIMIZE TABLE hashfile_hashes;
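A minimal sketch of the export half: run a command, capture its stdout, and gzip it in memory so a route can return it as a download. The mysqldump flags shown in the comment are an assumption and are not exercised here.

```python
import gzip
import subprocess
import sys

def gzipped_output(cmd):
    """Run a command and return its stdout gzip-compressed, ready to be
    served as a file download."""
    raw = subprocess.run(cmd, capture_output=True, check=True).stdout
    return gzip.compress(raw)

# For the real export, cmd might look like
# ['mysqldump', '--single-transaction', 'hashview']  (assumption, untested)
```

For large databases a streaming approach (piping mysqldump through gzip rather than buffering in memory) would likely be preferable.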

Better error handling with hashcat/hashview-agent

As it stands, if hashcat errors out, say if there is a typo in a mask, the task it's assigned to will fail and hashview will move on to the next step. Instead we should capture these errors.

https://github.com/hashview/hashview/blob/main/install/hashview-agent/hashview-agent.py#L125 is responsible for hashcat execution; we will probably have to change that to something that captures returned error codes. We also need to confirm that hashcat returns error codes if there is a failure to execute (or overheating, not enough memory, etc.).

The agent API will need to be updated to handle a response for when an error occurs on a task.

We will need to figure out what to do server side when this happens. Mark the task as failed on the main page? Do we notify the user/admin/task owner?

Do we hang the job before completion? On the jobs page, do we mark the job as complete or as incomplete with errors?
Do we track the errors outside of the task, e.g. on the job, so that they're viewable after the job finishes?

DB password cannot contain the % character

During initial setup, if a user submits a password that contains a '%' character, the app will error out on load. We might need to do some encoding or escaping to address this.
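The likely root cause is configparser's value interpolation, which treats '%' as special (the same InterpolationSyntaxError appears in the ./hashview.py traceback issue elsewhere on this page). A minimal sketch of the problem and one possible fix, disabling interpolation when reading the config:

```python
import configparser

CONF = '[database]\npassword = p%ss\n'  # hypothetical database.conf contents

# Default parser: interpolation chokes on the bare '%' at lookup time.
bad = configparser.ConfigParser()
bad.read_string(CONF)
try:
    bad['database']['password']
except configparser.InterpolationSyntaxError as err:
    print('default parser fails:', err)

# interpolation=None reads the value verbatim, so '%' is fine.
good = configparser.ConfigParser(interpolation=None)
good.read_string(CONF)
print(good['database']['password'])
```

Whether disabling interpolation is safe for the rest of hashview's config usage would need checking; escaping '%' as '%%' on write is the alternative.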

Thanks for the feedback regarding Werkzeug.

Regarding the first error:

  1. Did you fill out the hashview/database.conf file?
  2. If so, does your password for your database contain a ' character? Or perhaps a / character?

Originally posted by @i128 in #55 (comment)

Dynamic Wordlist Based on Existing User Names

Hashview currently has one dynamic wordlist, built from the unique set of cleartext passwords found in the DB. It would be nice to create a second dynamic wordlist built from the unique set of usernames found in the hashfile_hashes.users table.

It would need to be created upon app installation.
hashview/utils/utils.py->update_dynamic_wordlist() will need to be updated.
hashview/models.py will need updating
migrations/ will need updating

But I think that should be it.
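The core of such an update might look like the sketch below. It assumes usernames are stored hex-encoded with latin-1, as the search code quoted later on this page suggests; unique_usernames and write_wordlist are hypothetical helpers, not existing hashview functions.

```python
def unique_usernames(hex_rows):
    """Decode hex-encoded (latin-1) usernames and return the unique, sorted set.
    hex_rows stands in for the result of a hashfile_hashes.username query."""
    seen = set()
    for hex_name in hex_rows:
        if hex_name:
            seen.add(bytes.fromhex(hex_name).decode('latin-1'))
    return sorted(seen)

def write_wordlist(path, hex_rows):
    """Write the dynamic username wordlist, one name per line."""
    with open(path, 'w', encoding='latin-1') as fh:
        for name in unique_usernames(hex_rows):
            fh.write(name + '\n')
```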

Users can run install w/o valid SMTP settings

We should probably add a 'test email' button under the user profile, and/or require an activation code during install to validate that email is set up correctly.

Yes, I found email and password, but after creating a task and running it, I had another error:

Agent:

Time.Started.....: Sat Apr 16 03:02:09 2022 (0 secs)
Time.Estimated...: Sat Apr 16 03:02:09 2022 (0 secs)
Kernel.Feature...: Optimized Kernel
Guess.Base.......: File (control/wordlists/dynamic-all.txt)
Guess.Mod........: Rules (control/rules/best64.rule)
Guess.Queue......: 1/1 (100.00%)
Speed.#1.........: 0 H/s (0.00ms) @ Accel:128 Loops:250 Thr:1024 Vec:1
Recovered........: 0/2419 (0.00%) Digests, 0/2419 (0.00%) Salts
Remaining........: 2419 (100.00%) Digests, 2419 (100.00%) Salts
Recovered/Time...: CUR:N/A,N/A,N/A AVG:N/A,N/A,N/A (Min,Hour,Day)
Progress.........: 0
Rejected.........: 0
Restore.Point....: 0
Restore.Sub.#1...: Salt:0 Amplifier:0-0 Iteration:0-250
Candidate.Engine.: Device Generator
Candidates.#1....: [Copying]
Hardware.Mon.#1..: Temp: 41c Fan: 0% Util: 56% Core:1961MHz Mem:3802MHz Bus:16

Started: Sat Apr 16 03:02:07 2022
Stopped: Sat Apr 16 03:02:11 2022

[*] No Results. Skipping upload.
[*] Done working
[*] No Results. Skipping upload.
[!] HTTP POST (response): Got an unexpected return code:500
Traceback (most recent call last):
  File "./hashview-agent.py", line 467, in <module>
    updateJobTaskResponse = updateJobTask(job_task['id'], 'Completed')
  File "./hashview-agent.py", line 361, in updateJobTask
    return api.updateJobTask(job_task_id, task_status)
  File "/home/shinobi/hashview_agent/agent/api/api.py", line 122, in updateJobTask
    decoded_response = json.loads(response)
  File "/usr/lib/python3.8/json/__init__.py", line 341, in loads
    raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not NoneType
root@home-pc:/home/shinobi/hashview_agent#

Server:

root@ubuntu:~/hashview# ./hashview.py
/usr/local/lib/python3.8/dist-packages/flask_sqlalchemy/__init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
  warnings.warn(FSADeprecationWarning(
Done!
Running Hashview! Enjoy.
 * Serving Flask app "hashview" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Debug mode: off
[DEBUG] Im retaining all the data: 2022-04-16 07:00:00.000944
[DEBUG] hashview.py->data_retention_cleanup() ff46ac1d564d1813
[DEBUG] hashview.py->data_retention_cleanup() hashview-agent.0.8.0.tgz
[DEBUG] hashview.py->data_retention_cleanup() .gitignore
Found Git Ignore!
[DEBUG] ==============
/root/hashview/hashview/utils/utils.py:193: SAWarning: fully NULL primary key identity cannot load any object. This condition may raise an error in a future release.
  rules_file = Rules.query.get(task.rule_id)
/root/hashview/hashview/utils/utils.py:242: SAWarning: fully NULL primary key identity cannot load any object. This condition may raise an error in a future release.
  agent = Agents.query.get(jobtask.agent_id)
[2022-04-16 07:02:22,314] ERROR in app: Exception on /v1/jobtask/status [POST]
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3/dist-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/python3/dist-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/root/hashview/hashview/api/routes.py", line 427, in v1_api_set_queue_jobtask_status
    if (update_job_task_status(jobtask_id = status_json['job_task_id'], status = status_json['task_status'])):
  File "/root/hashview/hashview/utils/utils.py", line 294, in update_job_task_status
    send_email(user, 'Hashview: Missing Pushover Key', 'Hello, you were due to recieve a pushover notification, but because your account was not provisioned with an pushover ID and Key, one could not be set. Please log into hashview and set these options under Manage->Profile.')
  File "/root/hashview/hashview/utils/utils.py", line 47, in send_email
    mail.send(msg)
  File "/usr/local/lib/python3.8/dist-packages/flask_mail.py", line 491, in send
    with self.connect() as connection:
  File "/usr/local/lib/python3.8/dist-packages/flask_mail.py", line 144, in __enter__
    self.host = self.configure_host()
  File "/usr/local/lib/python3.8/dist-packages/flask_mail.py", line 158, in configure_host
    host = smtplib.SMTP(self.mail.server, self.mail.port)
  File "/usr/lib/python3.8/smtplib.py", line 255, in __init__
    (code, msg) = self.connect(host, port)
  File "/usr/lib/python3.8/smtplib.py", line 339, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "/usr/lib/python3.8/smtplib.py", line 310, in _get_socket
    return socket.create_connection((host, port), timeout,
  File "/usr/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/usr/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The task stopped with a successful status, but in reality the brute force never ran; task duration was ~1 second.

So for this issue: it looks like the mail server cannot be reached by the hashview server. As a result, the operation to send you a notification that the job completed failed, and the agent did not handle that error well. I think two different issues can be drawn from this: 1) a button to test email, 2) better error handling on the agent.

Originally posted by @i128 in #55 (comment)

Users are able to edit jobs when they are running

While this won't affect the currently running job, users should still not be able to edit a job while it's running.

To fix:
Disable the button on the job list.
Place a check in the /jobs/<int:job_id>/assigned_hashfile/ route to confirm whether the job is running, and throw the user back to the job list with an error.
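A minimal sketch of such a guard, separate from the route so it can be reused by the job-list template logic; the status names are assumptions, not hashview's actual job states.

```python
# Statuses in which editing is allowed (assumed names, not hashview's real ones).
EDITABLE_STATUSES = {'Ready', 'Stopped', 'Completed'}

def job_is_editable(status):
    """Queued or running jobs should not be editable."""
    return status in EDITABLE_STATUSES

# In the route, the check could look roughly like:
#   if not job_is_editable(job.status):
#       flash('Job is running and can not be edited.', 'danger')
#       return redirect(url_for('jobs.jobs_list'))
```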

Incomplete hashfile db entry when user attempts to upload invalid file

Examples include a user selecting a hashfile type of $user:$hash and then submitting a Kerberos hash. Ultimately this results in a hashfile table entry but no entry in hashfile_hashes. The user is presented with an error, and when an admin/user navigates to the hashfiles pane, it errors out since there are no entries in hashfile_hashes.

Automatic Job Type of Top Masks

New job type of top masks that takes the founds, processes them, and then takes the top n masks. I believe these are already calculated. The job could also be limited to a specific time.

User search is case sensitive, should be case insensitive

Specific issue with hashview/hashview/searches/routes.py:

if searchForm.validate_on_submit():
        if searchForm.search_type.data == 'hash':
            results = db.session.query(Hashes, HashfileHashes).join(HashfileHashes, Hashes.id==HashfileHashes.hash_id).filter(Hashes.ciphertext==searchForm.query.data).all()
        elif searchForm.search_type.data == 'user':
            results = db.session.query(Hashes, HashfileHashes).join(HashfileHashes, Hashes.id==HashfileHashes.hash_id).filter(HashfileHashes.username.like('%' + searchForm.query.data.encode('latin-1').hex() + '%')).all()
        elif searchForm.search_type.data == 'password':
            results = db.session.query(Hashes, HashfileHashes).join(HashfileHashes, Hashes.id==HashfileHashes.hash_id).filter(Hashes.plaintext == searchForm.query.data.encode('latin-1').hex()).all()

This is because we hex encode the string prior to doing a search. Unfortunately, to really fix this we'd need to hex decode every entry in hashfile_hashes.username and then compare.

Another option might be to force lowercase usernames on import and then force lowercase searching 🤔
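That second option can be sketched in a few lines. The latin-1 hex encoding mirrors the query above; normalize() and matches() are hypothetical helpers assuming usernames are also lowercased at import time.

```python
def normalize(username):
    """Lowercase, then hex-encode (latin-1) a username for storage or search."""
    return username.lower().encode('latin-1').hex()

def matches(stored_hex, query):
    """Case-insensitive substring match, equivalent to the LIKE '%...%' above,
    provided stored_hex was produced by normalize() at import time."""
    return normalize(query) in stored_hex
```

Note this inherits the existing approach's quirk that substring matching happens on the hex encoding rather than the decoded text.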

Failure to properly redirect user when attempting to delete a wordlist associated with a task

@wordlists.route("/wordlists/delete/<int:wordlist_id>", methods=['POST'])
@login_required
def wordlists_delete(wordlist_id):
    wordlist = Wordlists.query.get(wordlist_id)
    if current_user.admin or wordlist.owner_id == current_user.id:

        # prevent deletion of dynamic list
        if wordlist.type == 'dynamic': 
            flash('Dynamic Wordlists can not be deleted.', 'danger')
            redirect(url_for('wordlists.wordlists_list'))  # BUG: the redirect is never returned

        # Check if associated with a Task 
        tasks = Tasks.query.all()
        for task in tasks:
            if task.wl_id == wordlist_id:
                flash('Failed. Wordlist is associated to one or more tasks', 'danger')
                return(url_for('wordlists.wordlists_list'))  # BUG: returns a URL string, not a redirect

        db.session.delete(wordlist)
        db.session.commit()
        flash('Wordlist has been deleted!', 'success')
    else:
        flash('Unauthorized Action!', 'danger')
    return redirect(url_for('wordlists.wordlists_list'))

Instead of returning the url_for, we need to redirect.
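A runnable sketch of the corrected pattern, with the route and endpoint names simplified stand-ins for the real blueprint (this assumes Flask is available, as it is for hashview itself):

```python
from flask import Flask, flash, redirect, url_for

app = Flask(__name__)
app.secret_key = 'dev-only'  # needed for flash(); placeholder value

@app.route('/wordlists')
def wordlists_list():
    return 'wordlists'

@app.route('/wordlists/delete/<int:wordlist_id>', methods=['POST'])
def wordlists_delete(wordlist_id):
    # ...assume the association check found a task using this wordlist...
    flash('Failed. Wordlist is associated to one or more tasks', 'danger')
    return redirect(url_for('wordlists_list'))  # return redirect(...), not url_for(...)
```

Returning redirect(url_for(...)) sends an HTTP 302 back to the list page, whereas the bare url_for(...) just renders the URL string as the response body.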

Issue installing agent

Hello all, this may be a silly question with a simple answer; however, during the installation process I successfully installed the client, but a ConnectionResetError is thrown when I enter the path to a local hashcat install.

The error code is as follows:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 421, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 416, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/crypto/.local/lib/python3.8/site-packages/requests/adapters.py", line 440, in send
resp = conn.urlopen(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 719, in urlopen
retries = retries.increment(
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3/dist-packages/six.py", line 702, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 421, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 416, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
response.begin()
File "/usr/lib/python3.8/http/client.py", line 316, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.8/socket.py", line 669, in readinto
return self._sock.recv_into(b)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "./hashview-agent.py", line 384, in <module>
response = send_heartbeat(agent_status, '')
File "./hashview-agent.py", line 71, in send_heartbeat
return api.heartbeat(agent_status, hc_status)
File "/home/crypto/Downloads/hashview-agent/agent/api/api.py", line 12, in heartbeat
response = http.post('/v1/agents/heartbeat', json.loads(json.dumps(message)))
File "/home/crypto/Downloads/hashview-agent/agent/http/http.py", line 70, in post
response = http.post(path, data=json.dumps(data), verify=False, cookies=cookie, headers=headers)
File "/home/crypto/.local/lib/python3.8/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/crypto/.local/lib/python3.8/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/crypto/.local/lib/python3.8/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/crypto/.local/lib/python3.8/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

Any stupid mistakes I could've made or proposed fixes?

Running jobs on homepage are not shown in queue order

If a user creates and starts job a), then a second user creates and starts job b), they will show up in order on the page. However, if/when job a completes and is then edited and started again while job b is running, it'll show above job b on the main page.

Add ability for Users to Edit Rules in Browser

Under Manage->Rules, for each entry under the control column there should be a new button that, when selected, brings the user to a new page where they can edit the contents of a rules file and then save it back to the server.

Support Upgrade from 0.8.0 -> 0.8.1

  • Initial considerations
  • Agent/server checks
  • initial startup of server
    • probably need to save the version of the app in the db (like last time)

Admin->users->info modal needs info

As an admin user, selecting Manage->User Accounts, then selecting the [i] icon to show info currently displays nothing.

Instead it should include the following information:

  • All associated jobs
  • All associated tasks
  • All associated rules
  • All associated wordlists

Code should probably go in:
hashview/users/routes.py
hashview/templates/users.html

Set max runtime per task

Create a new setting under Admin->Settings for max runtime of tasks (0 = unused; otherwise default to 24 hours).
Then, on the summary page, inform the user of the max runtime value per task and for the overall job.
Finally, on agent heartbeat, check start time vs max runtime for the running task, and if the time is exceeded, send a cancel message to the client.
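The heartbeat check can be sketched as a small pure function; the surrounding names (task.started_at, settings.max_runtime, send_cancel_to_agent) are assumptions for illustration.

```python
from datetime import datetime, timedelta

def exceeded_max_runtime(started_at, now, max_runtime_hours):
    """True if a running task has outlived the admin-configured limit.
    0 means the limit is unused, per the proposal above."""
    if max_runtime_hours == 0:
        return False
    return now - started_at > timedelta(hours=max_runtime_hours)

# On each heartbeat (hypothetical names):
#   if exceeded_max_runtime(task.started_at, datetime.utcnow(), settings.max_runtime):
#       send_cancel_to_agent(task)
```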

Questions yet to be answered.
Do we prompt user in wizard for this value?
Do we have a max runtime for jobs?
Do we send a notification on jobs that exceeded runtime?

Unable to upload Wordlist

Hello.
When I try to upload a wordlist:

  • add name (ex : frenchpw)
  • select password file (txt format, 171 kb file)
  • clicking upload button

=> The name field turns red/empty with "This field is required", and the file name is emptied too.

Tested on a Debian 11 machine; hashview running with --debug raised no errors.
The symptoms occurred with a bigger file too (the crackstation txt file).

Thanks for your help.

Prompt for Notifications and Hash Selection should come before task selection

Right now, during the New Job Creation process, the user is first prompted to select what tasks, if any, they'd like to include with the job; THEN they are prompted whether they want to be notified upon job creation / hash recovery. These steps should be reversed: the prompt for notifications/hash selection should come before the task selection.

Users List should show last login date

A new column in models.py for the users table should record the last date/time each user logged in.
That information should then be displayed on the Manage->Users list so administrators can review and clean up any stale users.

Ability to import potfiles

New feature idea: the ability to import potfiles directly, without manually splitting the files, uploading the clears as a dictionary, and cracking them through hashview.

Record and display instacrack rate in analytics page

Since the insta-crack rate is displayed to the end user, we should record this value in the hashfile table and then display it when they view the analytics.

For multiple hashfiles belonging to the same customer we can sum this value.

Unique NTLMv1/NTLMv2 hashes false positive

When submitting two NTLMv1/v2 hashes where the usernames are identical but the computer names are different, hashview will throw an error claiming the hashes are the same and that only one should be submitted.

A solution would be to detect whether the account is a local account or a domain account and run the uniqueness checks depending on the result.

Add new graph to Analytics Page: Recovered Hashes

Right now Hashview has a graph showing Recovered Accounts, where the values are a unique set of user:hash pairs.
We should add a new doughnut graph that looks similar to Recovered Accounts but is labeled Recovered Hashes. This would be a unique set of hashes for the hashfile, customer, or all customers.
It should be placed below Recovered Accounts.

Receive error when running ./hashview.py

Hi, I followed the manual exactly; how can I fix this problem?

OS: ubuntu 20.04

root@ubuntu:~/hashview# ./hashview.py
Traceback (most recent call last):
  File "./hashview.py", line 5, in <module>
    from hashview import create_app
  File "/root/hashview/hashview/__init__.py", line 6, in <module>
    from hashview.config import Config
  File "/root/hashview/hashview/config.py", line 5, in <module>
    class Config:
  File "/root/hashview/hashview/config.py", line 8, in Config
    SQLALCHEMY_DATABASE_URI = 'mysql+mysqlconnector://' + file_config['database']['username'] + ':' + file_config['database']['password'] + '@' + file_config['database']['host'] + '/hashview'
  File "/usr/lib/python3.8/configparser.py", line 1255, in __getitem__
    return self._parser.get(self._name, key)
  File "/usr/lib/python3.8/configparser.py", line 799, in get
    return self._interpolation.before_get(self, section, option, value,
  File "/usr/lib/python3.8/configparser.py", line 395, in before_get
    self._interpolate_some(parser, option, L, value, section, defaults, 1)
  File "/usr/lib/python3.8/configparser.py", line 442, in _interpolate_some
    raise InterpolationSyntaxError(
configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%'

Removing a hashfile is slow

https://github.com/hashview/hashview/blob/main/hashview/hashfiles/routes.py#L29

This approach takes forever and can be made more efficient. Instead of iterating through hashfile_hashes and checking for duplicates/notifications, we could instead run:

delete from hashfile_hashes where hashfile_id ='x';
delete from hashfiles where id = 'x';
delete from hashes where id NOT IN (SELECT hash_id from hashfile_hashes) and cracked = '0';
delete from notifications where hash_id NOT IN (SELECT hash_id from hashfile_hashes) and cracked = '0';

Job completion notification should include a link to the hashfile analytics page

Under hashview/utils/utils.py in update_job_task_status, when a job has completed we check whether there is an entry in the jobs notification table, and if there is, we send an email to the job owner saying their job has completed. This email should include a link to the analytics page where that hashfile is presented.

Also, the time listed in the email should be changed from raw seconds to hours, minutes, etc.
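A small sketch of such a formatter (the name humanize_seconds is an assumption, not an existing hashview helper):

```python
def humanize_seconds(total_seconds):
    """Render a duration in seconds as 'Xd Xh Xm Xs', dropping empty leading units."""
    days, rem = divmod(int(total_seconds), 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    parts = []
    for value, unit in ((days, 'd'), (hours, 'h'), (minutes, 'm'), (seconds, 's')):
        # Keep a unit once any larger unit has been emitted; always keep seconds.
        if value or parts or unit == 's':
            parts.append(f'{value}{unit}')
    return ' '.join(parts)
```

For example, humanize_seconds(90061) renders as '1d 1h 1m 1s'.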

Search API

The ability to perform hash, username, and password searches via the API.

We need to figure out whether or not to require user auth for this.
