
deep-learning-with-pytorch-lightning's Introduction

Packt Conference

3 Days, 20+ AI Experts, 25+ Workshops and Power Talks

Code: USD75OFF

Deep Learning with PyTorch Lightning


This is the code repository for Deep Learning with PyTorch Lightning, published by Packt.

Swiftly build high-performance Artificial Intelligence (AI) models using Python

What is this book about?

PyTorch Lightning lets researchers build their own Deep Learning (DL) models without having to worry about the boilerplate. With the help of this book, you'll be able to maximize productivity for DL projects while ensuring full flexibility from model formulation through to implementation. You'll take a hands-on approach to implementing PyTorch Lightning models to get up to speed in no time.

This book covers the following exciting features:

  • Customize models that are built for different datasets, model architectures, and optimizers
  • Understand how to build a variety of deep learning models, from image recognition and time series to GANs, semi-supervised, and self-supervised models
  • Use out-of-the-box model architectures and pre-trained models via transfer learning
  • Run and tune DL models in a multi-GPU environment using mixed precision
  • Explore techniques for model scoring on massive workloads

If you feel this book is for you, get your copy today!

https://www.packtpub.com/

Instructions and Navigations

Most of the code is specific to the PyTorch Lightning and Torch versions mentioned above. Please ensure compatibility by installing the correct packages as defined in the Technical Requirements section of the book.

All of the code is organized into folders. For example, Chapter02.

The code will look like the following:

import pytorch_lightning as pl
...
# use only 10% of the training data for each epoch
trainer = pl.Trainer(limit_train_batches=0.1)
# use only 10 batches per epoch
trainer = pl.Trainer(limit_train_batches=10)

Following is what you need for this book: This deep learning book is for citizen data scientists and expert data scientists transitioning from other frameworks to PyTorch Lightning. It will also be useful for deep learning researchers who are just getting started with coding deep learning models using PyTorch Lightning. Working knowledge of Python programming and an intermediate-level understanding of statistics and deep learning fundamentals are expected.

With the following software and hardware list you can run all code files present in the book (Chapters 1-10).

Software and Hardware List

| Chapter | Software required | OS required |
| ------- | ----------------- | ----------- |
| 1-10 | PyTorch Lightning | Cloud, Anaconda (Mac, Windows) |
| 1-10 | Torch | Cloud, Anaconda (Mac, Windows) |
| 1-10 | TensorBoard | Cloud, Anaconda (Mac, Windows) |

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. Click here to download it.

Related products

Get to Know the Author

Kunal Sawarkar is a Chief Data Scientist and AI thought leader. He leads the worldwide partner ecosystem in building innovative AI products. He also serves as an Advisory Board Member and an Angel Investor. He holds a master’s degree from Harvard University with major coursework in applied statistics. He has been applying machine learning to solve previously unsolved problems in industry and society, with a special focus on deep learning. Kunal has led various AI product R&D labs and has 20+ patents and papers published in this field. When not diving into data, he enjoys rock climbing and learning to fly aircraft, while pursuing his insatiable curiosity for astronomy and wildlife.

Download a free PDF

If you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost.
Simply click on the link to claim your free PDF.

https://packt.link/free-ebook/9781800561618

deep-learning-with-pytorch-lightning's People

Contributors

amitpj, biharicoder, dheeraj-arremsetty, kunal-savvy, mohammedyusufimaratwale, packt-itservice, packtutkarshr, sonam-packt


deep-learning-with-pytorch-lightning's Issues

book

Hi, I recently purchased your book for 3300/- including the shipping cost. There are a lot of code mistakes, such as indentation errors, and trying to reproduce the code is a hassle. Very disappointed.

Chapter 2: Perceptron_model_XOR.ipynb

When reproducing the same code as-is from the repository, I am facing the following issue.

ValueError: too many values to unpack (expected 2)

torch version: 1.10.1
pytorch-lightning version: 1.5.6

Author Verdict

@kunal_savvy I have enclosed screenshots in #4

Looking forward to hearing from you regarding the verdict.

accuracy() error

Hi, I got the error accuracy() missing 1 required positional argument: 'task' when training the model. How can I fix this?


Chapter 2 ipynb not found

Dear authors, I'm trying to use the book's GitHub repository. Chapter 2 of the book explains a simple XOR model and a CNN approach for image classification on the cats-and-dogs dataset, but in this repository the Chapter 2 notebook covers cancer detection instead.

Where can I find the missing examples?

Deprecation of installation of lightning-bolts

The installation of lightning-bolts needs to be done as follows:

pip install lightning-bolts --quiet

This one is deprecated and raises errors:

!pip install pytorch-lightning-bolts --quiet

self.metrics moved to torchmetrics

self.metrics gives an error; torchmetrics now has to be installed:

pip install torchmetrics

and then import the metric specifically, like:

accur=torchmetrics.Accuracy()
accuracy=accur(outputs,labels)

Please update your code... nothing works.

for chapter 3

"""
IMPORTANT NOTE
Any input text data that is less than the max_seq_len value will be padded,
and anything bigger will be trimmed down.
"""
class HealthClaimClassifier(pl.LightningModule):

    def __init__(self, max_seq_len=512, batch_size=128, learning_rate = 0.001):
        super().__init__()
        self.learning_rate = learning_rate
        self.max_seq_len = max_seq_len
        self.batch_size = batch_size
        self.loss = nn.CrossEntropyLoss()

        self.pretrain_model  = AutoModel.from_pretrained('bert-base-uncased')
        self.pretrain_model.eval()
        for param in self.pretrain_model.parameters():
            param.requires_grad = False


        self.new_layers = nn.Sequential(
            nn.Linear(768, 512),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512,4),
            nn.LogSoftmax(dim=1)
        )

    def prepare_data(self):
      tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

      tokens_train = tokenizer.batch_encode_plus(
          pub_health_train["main_text"].tolist(),
          max_length = self.max_seq_len,
          pad_to_max_length=True,
          truncation=True,
          return_token_type_ids=False
      )

      tokens_test = tokenizer.batch_encode_plus(
          pub_health_test["main_text"].tolist(),
          max_length = self.max_seq_len,
          pad_to_max_length=True,
          truncation=True,
          return_token_type_ids=False
      )
      
      '''
      Now we need to create features and extract the target variable from the dataset.
      '''
      self.train_seq = torch.tensor(tokens_train['input_ids'])
      self.train_mask = torch.tensor(tokens_train['attention_mask'])
      self.train_y = torch.tensor(pub_health_train["label"].tolist())

      self.test_seq = torch.tensor(tokens_test['input_ids'])
      self.test_mask = torch.tensor(tokens_test['attention_mask'])
      self.test_y = torch.tensor(pub_health_test["label"].tolist())

    def forward(self, encode_id, mask):
        _, output= self.pretrain_model(encode_id, attention_mask=mask,return_dict=False)
        output = self.new_layers(output)
        return output

    def train_dataloader(self):
      train_dataset = TensorDataset(self.train_seq, self.train_mask, self.train_y)
      self.train_dataloader_obj = DataLoader(train_dataset, batch_size=self.batch_size)
      return self.train_dataloader_obj


    def test_dataloader(self):
      test_dataset = TensorDataset(self.test_seq, self.test_mask, self.test_y)
      self.test_dataloader_obj = DataLoader(test_dataset, batch_size=self.batch_size)
      return self.test_dataloader_obj

    def training_step(self, batch, batch_idx):
      encode_id, mask, targets = batch
      outputs = self(encode_id, mask) 
      preds = torch.argmax(outputs, dim=1)
      train_accuracy = accuracy(preds, targets)
      loss = self.loss(outputs, targets)
      self.log('train_accuracy', train_accuracy, prog_bar=True, on_step=False, on_epoch=True)
      self.log('train_loss', loss, on_step=False, on_epoch=True)
      return {"loss":loss, 'train_accuracy': train_accuracy}

    def test_step(self, batch, batch_idx):
      encode_id, mask, targets = batch
      outputs = self.forward(encode_id, mask)
      preds = torch.argmax(outputs, dim=1)
      test_accuracy = accuracy(preds, targets,task="multiclass")
      loss = self.loss(outputs, targets)
      return {"test_loss":loss, "test_accuracy":test_accuracy}

    def test_epoch_end(self, outputs):
      test_outs = []
      for test_out in outputs:
          out = test_out['test_accuracy']
          test_outs.append(out)
      total_test_accuracy = torch.stack(test_outs).mean()
      self.log('total_test_accuracy', total_test_accuracy, on_step=False, on_epoch=True)
      return total_test_accuracy

    def configure_optimizers(self):
      params = self.parameters()
      optimizer = optim.Adam(params=params, lr = self.learning_rate)
      return optimizer

I got the following error:

TypeError                                 Traceback (most recent call last)
[<ipython-input-24-0de98dbbb444>](https://localhost:8080/#) in <cell line: 4>()
      2 
      3 trainer = pl.Trainer(fast_dev_run=True, devices=1, accelerator="gpu")
----> 4 trainer.fit(model)

25 frames
[<ipython-input-23-ce55fa773c61>](https://localhost:8080/#) in training_step(self, batch, batch_idx)
     77       outputs = self(encode_id, mask)
     78       preds = torch.argmax(outputs, dim=1)
---> 79       train_accuracy = accuracy(preds, targets)
     80       loss = self.loss(outputs, targets)
     81       self.log('train_accuracy', train_accuracy, prog_bar=True, on_step=False, on_epoch=True)

TypeError: accuracy() missing 1 required positional argument: 'task'

I have already fixed the forward function using:

#https://github.com/prateekjoshi565/Fine-Tuning-BERT/issues/10

Note to all readers about revised edition

Dear Reader

Thanks a lot for showing your interest and trust in the book. The revised edition of the book is ready, and it is more than just a code upgrade to PL 1.5.x.

  • It has all new revised use cases with larger, more complex real-world datasets to help you learn better
  • A new chapter is added on Lightning Flash for SOTA models
  • New content with examples on video classification, automatic speech recognition, and transfer learning with ResNet-50 as well as BERT NLP
  • A revamped structure to ensure package compatibility during the installation process
  • A section on the next steps to advance your learning in each chapter

This revised edition is different from the previous pre-release copies that some readers may have received before April. The previous edition was based on PyTorch Lightning 1.1.x or 1.0.x. However, as we neared the launch, it became clear that with the PyTorch Lightning 1.5.x release, much of the framework had undergone functional changes, including callbacks and key capabilities. While the code could have worked in isolation by installing the correct packages, in order to provide a better experience for readers we decided to delay the release and upgrade all the content to 1.5.x as a revised edition. This ensures that all the latest features of the new framework releases are correctly captured, which makes applications in deep learning even easier.

To this effect, we changed the book's release date in the USA on Amazon, and the book was not released there during this time. However, that change was not reflected in some countries, and a couple of pre-orders were shipped with the previous version of the book.

If anyone received a copy of the book prior to April, the publisher and I would be happy to send you a free revised copy as a replacement for your previous order. Doing so directly via Amazon turned out to be a challenge, since Amazon does not share customer data. If you forward your order info along with your contact details to the publisher email id at Gebin George, we would be glad to send a copy of the revised edition.

As always, we appreciate your feedback, so please do share if you come across any new issues. You can always reach me at Kunal.

Thanks & Regards,
Kunal S
[Along with Shivam Solanki and Amit]

about cGAN

In Chapter 6 of this book, the DCGAN: I tried to change it to a cDCGAN that can generate images of different foods, but my code doesn't work. Do you have cDCGAN code?
