
opennsfw2's People

Contributors

bhky


opennsfw2's Issues

OpenNSFW in PyTorch

Hey!
Thanks for the great work!
Your current implementation uses TensorFlow, and the highest CUDA version supported for TF GPU usage is 11.8.
I am running CUDA 12.2 and cannot use your code on the GPU, as TF does not support GPU execution for CUDA versions above 11.8.
Do you have any pointers or checkpoints for converting this work to PyTorch?
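
One possible route for the CUDA 12.x GPU problem (not something the library ships, and separate from a true PyTorch port) is to export the Keras model to ONNX and run it with onnxruntime-gpu, which supports newer CUDA versions. A minimal sketch, assuming tf2onnx and onnxruntime-gpu are installed and that your TF/Keras setup is compatible with tf2onnx:

# Hypothetical sketch: export the opennsfw2 Keras model to ONNX and run it
# outside TensorFlow. The file name and placeholder input are assumptions.
import numpy as np
import tf2onnx
import onnxruntime as ort
import opennsfw2 as n2

model = n2.make_open_nsfw_model()  # Keras model with default weights
tf2onnx.convert.from_keras(model, output_path="open_nsfw.onnx")

session = ort.InferenceSession("open_nsfw.onnx", providers=["CUDAExecutionProvider"])
input_name = session.get_inputs()[0].name

# `image` must already be preprocessed, e.g. via n2.preprocess_image(...).
image = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder input
sfw_prob, nsfw_prob = session.run(None, {input_name: image})[0][0]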

Update to 0.11.0 causing exceptions

Our low-level implementation raises an error after updating to the 0.11.0 release.

Code

import numpy
from PIL import Image
import opennsfw2

# Frame, get_predictor and MAX_PROBABILITY are defined elsewhere in facefusion.
def predict_frame(target_frame : Frame) -> bool:
	image = Image.fromarray(target_frame)
	image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO)
	views = numpy.expand_dims(image, axis = 0)
	_, probability = get_predictor().predict(views)[0]
	return probability > MAX_PROBABILITY

Source: https://github.com/facefusion/facefusion/blob/master/facefusion/predictor.py

Error

Traceback (most recent call last):
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/gradio/routes.py", line 523, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1437, in process_api
    result = await self.call_function(
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1109, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/gradio/utils.py", line 865, in wrapper
    response = f(*args, **kwargs)
  File "/home/henry/PycharmProjects/facefusion/facefusion/uis/components/preview.py", line 95, in update_preview_image
    preview_frame = process_preview_frame(target_frame)
  File "/home/henry/PycharmProjects/facefusion/facefusion/uis/components/preview.py", line 118, in process_preview_frame
    if predict_frame(temp_frame):
  File "/home/henry/PycharmProjects/facefusion/facefusion/predictor.py", line 33, in predict_frame
    _, probability = get_predictor().predict(views)[0]
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/keras_core/src/utils/traceback_utils.py", line 123, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InternalError: Graph execution error:

Detected at node 'StatefulPartitionedCall' defined at (most recent call last):
    File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
      self._bootstrap_inner()
    File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
      self.run()
    File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
      result = context.run(func, *args)
    File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/gradio/utils.py", line 865, in wrapper
      response = f(*args, **kwargs)
    File "/home/henry/PycharmProjects/facefusion/facefusion/uis/components/preview.py", line 95, in update_preview_image
      preview_frame = process_preview_frame(target_frame)
    File "/home/henry/PycharmProjects/facefusion/facefusion/uis/components/preview.py", line 118, in process_preview_frame
      if predict_frame(temp_frame):
    File "/home/henry/PycharmProjects/facefusion/facefusion/predictor.py", line 33, in predict_frame
      _, probability = get_predictor().predict(views)[0]
    File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/keras_core/src/utils/traceback_utils.py", line 118, in error_handler
      return fn(*args, **kwargs)
    File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/keras_core/src/backend/tensorflow/trainer.py", line 504, in predict
      batch_outputs = self.predict_function(data)
    File "/home/henry/PycharmProjects/facefusion/venv/lib/python3.10/site-packages/keras_core/src/backend/tensorflow/trainer.py", line 210, in one_step_on_data_distributed
      outputs = self.distribute_strategy.run(
Node: 'StatefulPartitionedCall'
libdevice not found at ./libdevice.10.bc
         [[{{node StatefulPartitionedCall}}]] [Op:__inference_one_step_on_data_distributed_4826]
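
The "libdevice not found at ./libdevice.10.bc" line indicates that XLA cannot locate the CUDA libdevice bitcode. A commonly reported workaround is to point XLA at the CUDA data directory before TensorFlow is imported; the path below is an assumption, so adjust it to your actual CUDA installation:

# Possible workaround for "libdevice not found at ./libdevice.10.bc":
# tell XLA where the CUDA toolkit (and its nvvm/libdevice directory) lives.
# "/usr/local/cuda" is an assumed path; replace it with your actual CUDA root.
import os
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/usr/local/cuda"

import tensorflow  # import TensorFlow only after setting the flag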

Replace PIL with CV2

This time I might "piss you off" by asking to replace Pillow with OpenCV and NumPy. Most AI projects already have these dependencies, so it makes sense to me - in our case, the PIL dependency exists for just one line:

image = Image.fromarray(target_frame)

It's fine if you close this issue instantly, but please do consider it.
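
For what it's worth, a rough OpenCV/NumPy approximation of the Yahoo-style preprocessing might look like the sketch below. This is only an illustration under the assumption that the pipeline is resize, centre crop, RGB-to-BGR conversion and mean subtraction; opennsfw2's actual preprocess_image may differ in detail (for example, it can involve a JPEG re-encode step), so results are not guaranteed to match exactly.

# Illustrative only: an approximate, PIL-free preprocessing path using OpenCV
# and NumPy. It may not be bit-identical to opennsfw2.preprocess_image.
import cv2
import numpy as np

def preprocess_frame(frame_rgb: np.ndarray) -> np.ndarray:
    resized = cv2.resize(frame_rgb, (256, 256), interpolation=cv2.INTER_LINEAR)
    top = (256 - 224) // 2
    cropped = resized[top:top + 224, top:top + 224]
    bgr = cropped[:, :, ::-1].astype(np.float32)               # RGB -> BGR
    bgr -= np.array([104.0, 117.0, 123.0], dtype=np.float32)   # assumed per-channel means
    return bgr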

Support for Mac M1

This awesome lib depends on tensorflow and therefore cannot be installed on Macs with the M1 chip, which need tensorflow-macos instead. We are using opennsfw2 for roop and would love to offer this support to our users.

ERROR WITH NO ERROR

Hi,
I don't understand what is happening with the opennsfw2 code.
My installation is OK. I installed Keras and TensorFlow 2.0 with CUDA, but nothing happens.
Any idea?
I attached a screenshot.
Thank you for your help.
[Screenshot: 0008_2022-09-10_17_heures_18]

sample output

Hello,
thanks for your great work.
Can we get an example output of video processing? Also, is it able to detect hentai or anime nudity too?
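
For reference, a minimal usage sketch of the video prediction helper, based on the library's documented predict_video_frames function (the file name is a placeholder and exact parameter names may vary between versions):

# Minimal sketch: per-frame NSFW probabilities for a video.
import opennsfw2 as n2

elapsed_seconds, nsfw_probabilities = n2.predict_video_frames("video.mp4")

# One timestamp and one probability per sampled frame.
for t, p in zip(elapsed_seconds, nsfw_probabilities):
    print(f"{t:.2f}s -> NSFW probability {p:.3f}")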

Range for video prediction

Is there a chance you could implement a frame range for video prediction, something like this?

predict_video_frames(video_path, (100, 200))

Many people in my community are going to appreciate that; see also the workaround sketch below.

Regards
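
Until such a parameter exists, one workaround is to extract the frame range yourself with OpenCV and score those frames with the model directly. A rough sketch, where the path and frame numbers are placeholder assumptions:

# Hypothetical workaround: score only frames 100-200 of a video by reading
# them with OpenCV and feeding them to the opennsfw2 model directly.
import cv2
import numpy as np
from PIL import Image
import opennsfw2 as n2

model = n2.make_open_nsfw_model()
capture = cv2.VideoCapture("video.mp4")        # placeholder path
capture.set(cv2.CAP_PROP_POS_FRAMES, 100)      # jump to the first frame of the range

images = []
for _ in range(100, 200):
    success, frame_bgr = capture.read()
    if not success:
        break
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    images.append(n2.preprocess_image(Image.fromarray(frame_rgb), n2.Preprocessing.YAHOO))
capture.release()

# Second output column is the NSFW probability per frame.
nsfw_probabilities = model.predict(np.array(images), verbose=0)[:, 1]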

Which NSFW Area is this AI covering?

Hi,

Very cool project. I am looking for an AI which covers nudity on the one hand, but doesn't judge merely sexy images, and which also flags traumatic images, like horror and the crazy things (the "NSFW 4" kind of material). Is that possible with this AI?

[Image: nsfw-chart]

I found this image online - which of its categories does your AI cover?

Thanks!

Improved Memory Management for Model Reuse

Hello, and first of all, thank you for this fantastic project; I truly appreciate it.

I've encountered what may be a minor issue when using predict_images in a for-loop, which results in an Out Of Memory (OOM) error on my 4GB RAM machine. After investigating the code, I found the following line:

model = make_open_nsfw_model(weights_path=weights_path)

It appears that the model is created anew with every call to predict_images, which could lead to memory leaks, as suggested by my testing. To address this, I attempted to move the model creation outside the function like this:

global_model = make_open_nsfw_model(weights_path=get_default_weights_path())

Then, I utilized the global model within the function, as shown below:

predictions = global_model.predict(images, batch_size=batch_size, verbose=0)

This change seems to have resolved the memory issue on my end.
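
Putting the pieces together, the reuse pattern described above could look roughly like the sketch below. It is based on the snippets in this issue, not on the library's internal code, and the helper name and batch size are assumptions:

# Sketch of the suggested pattern: build the model once and reuse it across
# calls, instead of recreating it inside every predict_images call.
import numpy as np
from PIL import Image
import opennsfw2 as n2

# Created once at module level (or lazily cached) and reused across calls.
global_model = n2.make_open_nsfw_model()

def predict_paths(image_paths, batch_size=8):
    # Hypothetical helper: preprocess each image and score the batch with the
    # shared model, returning the NSFW probability per image.
    images = np.array([
        n2.preprocess_image(Image.open(path), n2.Preprocessing.YAHOO)
        for path in image_paths
    ])
    return global_model.predict(images, batch_size=batch_size, verbose=0)[:, 1]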
