py-torchsummary
Model summary in PyTorch similar to `model.summary()` in Keras
Science Score: 23.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 1 of 10 committers (10.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (7.4%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: sksq96
- License: mit
- Language: Python
- Default Branch: master
- Size: 43 KB
Statistics
- Stars: 4,043
- Watchers: 35
- Forks: 415
- Open Issues: 139
- Releases: 0
Metadata Files
README.md
Use the new and updated torchinfo.
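For new projects, torchinfo exposes an equivalent `summary` call; below is a minimal migration sketch, assuming torchinfo's convention of including the batch dimension in `input_size` (check the torchinfo docs for the exact current signature).

```python
# Hypothetical migration sketch; verify against the installed torchinfo version.
import torch.nn as nn
from torchinfo import summary

model = nn.Sequential(nn.Conv2d(1, 10, kernel_size=5), nn.ReLU())
# torchinfo expects the batch dimension as part of input_size.
summary(model, input_size=(1, 1, 28, 28))
```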
Keras style model.summary() in PyTorch
Keras has a neat API to view a visualization of the model, which is very helpful while debugging your network. Here is barebones code that tries to mimic the same in PyTorch. The aim is to provide information complementary to what `print(your_model)` gives you in PyTorch.
Usage
```
pip install torchsummary
# or
git clone https://github.com/sksq96/pytorch-summary
```

```python
from torchsummary import summary
summary(your_model, input_size=(channels, H, W))
```
- Note that the `input_size` is required to make a forward pass through the network.
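The released 1.5.x package also accepts optional `batch_size` and `device` keyword arguments; here is a minimal, self-contained sketch (the tiny `nn.Sequential` model is only a stand-in, and the keyword arguments should be verified against your installed version).

```python
import torch
import torch.nn as nn
from torchsummary import summary

# Tiny stand-in model so the call runs end to end.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU())

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# batch_size only affects the printed shape column; the device argument
# must match where the model's parameters live.
summary(model, input_size=(3, 32, 32), batch_size=-1, device=device)
```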
Examples
CNN for MNIST
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchsummary import summary

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # PyTorch v0.4.0
model = Net().to(device)

summary(model, (1, 28, 28))
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 10, 24, 24]             260
            Conv2d-2             [-1, 20, 8, 8]           5,020
         Dropout2d-3             [-1, 20, 8, 8]               0
            Linear-4                   [-1, 50]          16,050
            Linear-5                   [-1, 10]             510
================================================================
Total params: 21,840
Trainable params: 21,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.08
Estimated Total Size (MB): 0.15
----------------------------------------------------------------
```
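As a sanity check on the table above, the per-layer counts follow the usual formulas (conv: out_channels × in_channels × kH × kW + out_channels; linear: out_features × in_features + out_features); a quick hand computation:

```python
# Recompute the parameter counts reported by summary() for the MNIST model.
conv1 = 10 * (1 * 5 * 5) + 10     # 260
conv2 = 20 * (10 * 5 * 5) + 20    # 5,020
fc1 = 50 * 320 + 50               # 16,050
fc2 = 10 * 50 + 10                # 510
print(conv1 + conv2 + fc1 + fc2)  # 21,840, matching "Total params" above
```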
VGG16
```python
import torch
from torchvision import models
from torchsummary import summary

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
vgg = models.vgg16().to(device)

summary(vgg, (3, 224, 224))
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 224, 224]           1,792
              ReLU-2         [-1, 64, 224, 224]               0
            Conv2d-3         [-1, 64, 224, 224]          36,928
              ReLU-4         [-1, 64, 224, 224]               0
         MaxPool2d-5         [-1, 64, 112, 112]               0
            Conv2d-6        [-1, 128, 112, 112]          73,856
              ReLU-7        [-1, 128, 112, 112]               0
            Conv2d-8        [-1, 128, 112, 112]         147,584
              ReLU-9        [-1, 128, 112, 112]               0
        MaxPool2d-10          [-1, 128, 56, 56]               0
           Conv2d-11          [-1, 256, 56, 56]         295,168
             ReLU-12          [-1, 256, 56, 56]               0
           Conv2d-13          [-1, 256, 56, 56]         590,080
             ReLU-14          [-1, 256, 56, 56]               0
           Conv2d-15          [-1, 256, 56, 56]         590,080
             ReLU-16          [-1, 256, 56, 56]               0
        MaxPool2d-17          [-1, 256, 28, 28]               0
           Conv2d-18          [-1, 512, 28, 28]       1,180,160
             ReLU-19          [-1, 512, 28, 28]               0
           Conv2d-20          [-1, 512, 28, 28]       2,359,808
             ReLU-21          [-1, 512, 28, 28]               0
           Conv2d-22          [-1, 512, 28, 28]       2,359,808
             ReLU-23          [-1, 512, 28, 28]               0
        MaxPool2d-24          [-1, 512, 14, 14]               0
           Conv2d-25          [-1, 512, 14, 14]       2,359,808
             ReLU-26          [-1, 512, 14, 14]               0
           Conv2d-27          [-1, 512, 14, 14]       2,359,808
             ReLU-28          [-1, 512, 14, 14]               0
           Conv2d-29          [-1, 512, 14, 14]       2,359,808
             ReLU-30          [-1, 512, 14, 14]               0
        MaxPool2d-31            [-1, 512, 7, 7]               0
           Linear-32                 [-1, 4096]     102,764,544
             ReLU-33                 [-1, 4096]               0
          Dropout-34                 [-1, 4096]               0
           Linear-35                 [-1, 4096]      16,781,312
             ReLU-36                 [-1, 4096]               0
          Dropout-37                 [-1, 4096]               0
           Linear-38                 [-1, 1000]       4,097,000
================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 218.59
Params size (MB): 527.79
Estimated Total Size (MB): 746.96
----------------------------------------------------------------
```
Multiple Inputs
```python
import torch
import torch.nn as nn
from torchsummary import summary

class SimpleConv(nn.Module):
    def __init__(self):
        super(SimpleConv, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
        )

    def forward(self, x, y):
        x1 = self.features(x)
        x2 = self.features(y)
        return x1, x2

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SimpleConv().to(device)

summary(model, [(1, 16, 16), (1, 28, 28)])
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1            [-1, 1, 16, 16]              10
              ReLU-2            [-1, 1, 16, 16]               0
            Conv2d-3            [-1, 1, 28, 28]              10
              ReLU-4            [-1, 1, 28, 28]               0
================================================================
Total params: 20
Trainable params: 20
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.77
Forward/backward pass size (MB): 0.02
Params size (MB): 0.00
Estimated Total Size (MB): 0.78
----------------------------------------------------------------
```
References
- The idea for this package was sparked by this PyTorch issue.
- Thanks to @ncullen93 and @HTLife.
- For model size estimation, thanks to @jacobkimmel (details here).
License
pytorch-summary is MIT-licensed.
Owner
- Name: Shubham Chandel
- Login: sksq96
- Kind: user
- Twitter: sksq96
- Repositories: 13
- Profile: https://github.com/sksq96
Applied Scientist at @Microsoft working on natural language and code. Previously NYU, @IBM research, @amzn.
GitHub Events
Total
- Issues event: 6
- Watch event: 72
- Issue comment event: 3
- Pull request event: 2
- Fork event: 5
Last Year
- Issues event: 6
- Watch event: 72
- Issue comment event: 3
- Pull request event: 2
- Fork event: 5
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Shubham Chandel | s****z@g****m | 32 |
| Seyyed Hossein Hasanpour | c****7@g****m | 3 |
| greenmon | g****n@k****r | 2 |
| Naireen | n****2@l****m | 2 |
| Lloyd Hughes | h****d@g****m | 2 |
| xiaofeixia | t****z@q****m | 1 |
| peakashu | 3****u | 1 |
| Michael Churchill | 1****h | 1 |
| CGKim412 | 4****2 | 1 |
| Naireen Hussain | n****n@m****a | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 88
- Total pull requests: 30
- Average time to close issues: 4 months
- Average time to close pull requests: over 1 year
- Total issue authors: 87
- Total pull request authors: 27
- Average comments per issue: 2.02
- Average comments per pull request: 1.03
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 3
- Pull requests: 2
- Average time to close issues: about 1 hour
- Average time to close pull requests: N/A
- Issue authors: 2
- Pull request authors: 2
- Average comments per issue: 0.67
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- JoJo-ops (1)
- apivovarov (1)
- iambyd (1)
- MartinB47 (1)
- gchhablani (1)
- Tord-Zhang (1)
- rohan-paul (1)
- thinkerthinker (1)
- swapnil-lader (1)
- illfang (1)
- DataXujing (1)
- nathanpainchaud (1)
- sidwa (1)
- mangotree3 (1)
- londumas (1)
Pull Request Authors
- z-a-f (3)
- cijinsama (2)
- frgfm (2)
- intexcor (2)
- rohansh-tty (1)
- JasonnnW3000 (1)
- nathanpainchaud (1)
- mhamdan91 (1)
- normanrz (1)
- kice (1)
- royank (1)
- cainmagi (1)
- ClashLuke (1)
- GCS-ZHN (1)
- luisherrmann (1)
Packages
- Total packages: 2
- Total downloads: pypi 139,390 last-month
- Total docker downloads: 10,565
- Total dependent packages: 56 (may contain duplicates)
- Total dependent repositories: 2,252 (may contain duplicates)
- Total versions: 8
- Total maintainers: 3
pypi.org: torchsummary
Model summary in PyTorch similar to `model.summary()` in Keras
- Homepage: https://github.com/sksq96/pytorch-summary
- Documentation: https://torchsummary.readthedocs.io/
- License: mit
- Latest release: 1.5.1 (published over 7 years ago)
spack.io: py-torchsummary
Keras has a neat API to view a visualization of the model, which is very helpful while debugging your network. Here is barebones code that tries to mimic the same in PyTorch. The aim is to provide information complementary to what is not provided by print(your_model) in PyTorch.
- Homepage: https://github.com/sksq96/pytorch-summary
- License: []
- Latest release: 1.5.1 (published almost 4 years ago)