lit_video_stream component

Extract features from any video while streaming it from a web link.

  • Supports passing in arbitrary feature extractor models.
  • Enables custom stream processors.
  • Runs on any accelerator (GPU/TPU/IPU), single device.

Supported feature extractors

  • Any vision model from OpenAI

Supported stream processors

  • YouTube
  • Any video from a URL

Install this component

lightning install component lightning/LAI-lit-video-streaming

Use the component

Here's an example of using this component in an app:

import lightning as L
from lit_video_stream import LitVideoStream
from lit_video_stream.feature_extractors import OpenAIClip
from lit_video_stream.stream_processors import YouTubeStreamProcessor


class LitApp(L.LightningFlow):
    def __init__(self) -> None:
        super().__init__()
        self.lit_video_stream = LitVideoStream(
            feature_extractor=OpenAIClip(batch_size=256),
            stream_processor=YouTubeStreamProcessor(),
            process_every_n_frame=30,
            num_batch_frames=256,
        )

    def run(self):
        one_min = "https://www.youtube.com/watch?v=8SQL4knuDXU"
        self.lit_video_stream.download(video_urls=[one_min, one_min])
        if len(self.lit_video_stream.features) > 0:
            print("do something with the features")


app = L.LightningApp(LitApp())
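
Run the app locally with the Lightning CLI (assuming the file is saved as app.py):

lightning run app app.py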

Add a progress bar

To track the progress of processing, implement a class with "update" and "reset" methods.

CLI progress bar

from tqdm import tqdm


class TQDMProgressBar:
    def __init__(self) -> None:
        self._prog_bar = None

    def update(self, current_frame):
        # advance the bar by one processed frame
        self._prog_bar.update(1)

    def reset(self, total_frames):
        # called before each video is processed; start a fresh bar
        if self._prog_bar is not None:
            self._prog_bar.close()
        self._prog_bar = tqdm(total=total_frames)
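
The component is expected to call "reset" once per video and "update" once per processed frame, roughly like this (a minimal sketch with made-up frame counts):

bar = TQDMProgressBar()
bar.reset(total_frames=300)       # start of a new video
for i in range(300):
    bar.update(current_frame=i)   # one call per processed frame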

For a web server

import requests


class StreamingProgressBar:
    def update(self, current_frame):
        # post incremental progress to your endpoint
        requests.post("http://your/url", json={"current_frame": current_frame})

    def reset(self, total_frames):
        # announce the total frame count for a new video
        requests.post("http://your/url", json={"total_frames": total_frames})

and pass it in:

self.lit_video_stream = LitVideoStream(
    feature_extractor=OpenAIClip(batch_size=256),
    stream_processor=YouTubeStreamProcessor(),
    process_every_n_frame=30,
    num_batch_frames=256,
    prog_bar=TQDMProgressBar(),
)
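
On the receiving side, a minimal endpoint could look like this (a hypothetical Flask server, not part of the component; the route is a stand-in for "http://your/url"):

from flask import Flask, request

server = Flask(__name__)
progress = {"total_frames": 0, "current_frame": 0}


@server.route("/", methods=["POST"])
def track_progress():
    # merge whichever key the progress bar posted
    progress.update(request.get_json())
    return {"ok": True}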

Add your own feature extractor

To pass in your own feature extractor, implement a class with a "run" method that maps a list of frames to a tensor of features. For example, this feature extractor uses OpenAI CLIP + PyTorch Lightning to accelerate feature extraction:

import clip as openai_clip
import torch
import pytorch_lightning as pl


class LightningInferenceModel(pl.LightningModule):
    def __init__(self, model, preprocess) -> None:
        super().__init__()
        self.model = model
        self.preprocess = preprocess

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        batch_features = self.model.encode_image(batch)
        batch_features /= batch_features.norm(dim=-1, keepdim=True)

        return batch_features


class OpenAIClip:
    def __init__(
        self, model_type="ViT-B/32", batch_size=256, feature_dim=512, num_workers=1
    ):
        self.model_type = model_type
        self.batch_size = batch_size
        self.feature_dim = feature_dim
        self.num_workers = num_workers

        model, preprocess = openai_clip.load(model_type)
        self.predictor = LightningInferenceModel(model, preprocess)

        # PyTorch Lightning does not yet support distributed inference;
        # until it does, run on a single device
        self.trainer = pl.Trainer(accelerator="auto", devices=1)

    def run(self, frames):
        # PIL images -> torch.Tensor
        batch = torch.stack([self.predictor.preprocess(frame) for frame in frames])

        # dataset
        batch_size = min(len(batch), self.batch_size)
        dl = torch.utils.data.DataLoader(
            batch, batch_size=batch_size, num_workers=self.num_workers
        )

        # ⚡ accelerated inference with PyTorch Lightning ⚡
        batch = self.trainer.predict(self.predictor, dataloaders=dl)

        # results
        batch = torch.cat(batch)
        return batch
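
A rough usage sketch (stand-in frames; with the default ViT-B/32 model the feature dimension is 512):

from PIL import Image

extractor = OpenAIClip(batch_size=32)
frames = [Image.new("RGB", (224, 224)) for _ in range(4)]  # stand-in PIL frames
features = extractor.run(frames)
print(features.shape)  # torch.Size([4, 512])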

Add a stream processor

Stream processors resolve a video link into a direct stream URL so frames can be read without downloading the whole file first. To add your own, simply pass in an object that implements "run".

Here's an example that creates a stream processor for YouTube:

from pytube import YouTube


class YouTubeStreamProcessor:
    def run(self, video_url):
        yt = YouTube(video_url)
        streams = yt.streams.filter(
            adaptive=True, subtype="mp4", resolution="360p", only_video=True
        )
        return streams[0].url
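
For the "Any video from a URL" case listed above, a pass-through processor may be all that's needed (a minimal sketch):

class PassThroughStreamProcessor:
    def run(self, video_url):
        # a direct link is already a stream URL
        return video_url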

TODO:

[ ] Multi-node


Issues

Demo app does not work locally

$ lightning --version
lightning, version 0.0.51
$ lightning run app --open-ui=False ./demo_app.py
ERROR: Found an exception when loading your application from demo_app.py. Please, resolve it to run your app.

Traceback (most recent call last):
  File "demo_app.py", line 2, in <module>
    from lit_video_stream import LitVideoStream
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/lit_video_stream/__init__.py", line 1, in <module>
    from lit_video_stream.component import LitVideoStream
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/lit_video_stream/component.py", line 6, in <module>
    from lit_video_stream.feature_extractors.open_ai import OpenAIClip
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/lit_video_stream/feature_extractors/__init__.py", line 1, in <module>
    from lit_video_stream.feature_extractors.open_ai import OpenAIClip
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/lit_video_stream/feature_extractors/open_ai.py", line 3, in <module>
    import pytorch_lightning as pl
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/__init__.py", line 30, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/__init__.py", line 26, in <module>
    from pytorch_lightning.callbacks.pruning import ModelPruning
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/callbacks/pruning.py", line 31, in <module>
    from pytorch_lightning.core.lightning import LightningModule
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/core/__init__.py", line 16, in <module>
    from pytorch_lightning.core.lightning import LightningModule
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/core/lightning.py", line 40, in <module>
    from pytorch_lightning.loggers import LightningLoggerBase, LoggerCollection
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/loggers/__init__.py", line 18, in <module>
    from pytorch_lightning.loggers.tensorboard import TensorBoardLogger
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/pytorch_lightning/loggers/tensorboard.py", line 26, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/torch/utils/tensorboard/__init__.py", line 10, in <module>
    from .writer import FileWriter, SummaryWriter  # noqa: F401
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/torch/utils/tensorboard/writer.py", line 9, in <module>
    from tensorboard.compat.proto.event_pb2 import SessionLog
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/tensorboard/compat/proto/tensor_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/tensorboard/compat/proto/resource_handle_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/tensorboard/compat/proto/tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "/home/alec/work/PyTorchLightning/lit_video_streaming/venv/lib/python3.10/site-packages/google/protobuf/descriptor.py", line 560, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
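
Per the workarounds listed in the error output, pinning protobuf before launching the app may be the quickest fix:

pip install "protobuf<4"

or set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (slower, pure-Python parsing).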
