
PyTorch: visualize trained filters and feature maps

I just wrote some simple code to visualize the trained filters and feature maps of a PyTorch model. For simplicity, the code below uses a pretrained AlexNet, but it should work with any network that contains Conv2d layers.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import os
from itertools import product
import numpy as np

import imageio

import torch
import torch.nn as nn
import torchvision.models as models
from torch.autograd import Variable

def visualize_conv_filters(net, dirname, dataname):
    dirname = os.path.join(dirname, dataname)
    os.makedirs(dirname, exist_ok=True)

    index_conv = 0
    for module in net.modules():
        if isinstance(module, nn.Conv2d):
            filters = module.weight.detach().numpy()

            shape = filters.shape
            filters_tile = np.zeros((shape[0]*shape[2],
                                     shape[1]*shape[3]))

            for i, j in product(range(shape[0]), range(shape[1])):
                filters_tile[i*shape[2]:(i+1)*shape[2],
                             j*shape[3]:(j+1)*shape[3]] = filters[i, j, :, :]
            filename = '%s_conv%d.png' % (dataname, index_conv)
            imageio.imwrite(os.path.join(dirname, filename),
                            filters_tile)

            index_conv += 1

def visualize_feature_maps(net, dirname, dataname, x):
    dirname_root = os.path.join(dirname, dataname)
    os.makedirs(dirname_root, exist_ok=True)

    net.eval()
    for index, layer in enumerate(list(net.features)):
        dirname = os.path.join(dirname_root, 'layer'+str(index))
        os.makedirs(dirname, exist_ok=True)

        x = layer(x)
        feature = x.detach().numpy()
        for i in range(feature.shape[1]):
            filename = 'layer%d_feature%d.png' % (index, i)
            imageio.imwrite(os.path.join(dirname, filename),
                            feature[0, i, :, :])

if __name__ == '__main__':
    net = models.alexnet(pretrained=True)

    visualize_conv_filters(net, './data/alexnet', 'filters')

    x = torch.randn(1, 3, 224, 224)  # the Variable wrapper is unnecessary since PyTorch 0.4
    visualize_feature_maps(net, './data/alexnet', 'random', x)
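One caveat: imageio.imwrite warns on raw float arrays, and the conversion to uint8 can clip negative filter weights. A small helper along these lines (my own addition, not part of the code above) rescales an array to the full uint8 range before saving:

```python
import numpy as np

def to_uint8(arr):
    """Rescale a float array to the [0, 255] uint8 range expected by image writers."""
    arr = arr.astype(np.float64)
    arr = arr - arr.min()          # shift so the minimum maps to 0
    peak = arr.max()
    if peak > 0:                   # avoid division by zero for constant arrays
        arr = arr / peak
    return (arr * 255).round().astype(np.uint8)
```

Usage would be, e.g., imageio.imwrite(path, to_uint8(filters_tile)).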

Anaconda: install OpenCV 3.4.1 on Windows 10

conda install … opencv and conda install … opencv-python ruined my environment built on Windows 10. The error seemed to be related to PyQt and authentication/path settings, but I couldn't pinpoint the true cause.

So, instead of using the Anaconda binary, I tried an unofficial Windows binary and it worked 🙂 The required steps are:
1. Go to the site
2. Download .whl file appropriate for your environment, say RIGHT_BINARY.whl (opencv_python‑3.4.1‑cp36‑cp36m‑win_amd64.whl in my case)
3. Install it with pip install RIGHT_BINARY.whl


PyTorch: fine-tune a pre-trained model

See the tutorial.

Important note:

  1. torchvision provides several pre-trained models with their trained parameters.
    1. AlexNet, DenseNet, Inception, ResNet, VGG are available, see here.
  2. With pretrained=True, pre-trained parameters are available.
  3. The pre-trained models are classified into two types:
    1. (feature, classifier) model (AlexNet, DenseNet, VGG)
      1. This model has two sub-networks, feature and classifier.
      2. Each sub-network is implemented as a torch.nn.Sequential object.
    2. (…, fc) model (Inception, ResNet)
      1. This model has several components and the final fully-connected layer, accessible by net.fc.
  4. To customize the final fully connected layer(s):
    1. define a pre-trained model with pretrained=True
    2. freeze the pre-trained parameters (set requires_grad = False)
    3. replace the final fully connected layer(s)
    4. give the optimizer only the trainable parameters
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision

if __name__ == '__main__':

    B = 4
    C = 3
    H = 224
    W = 224
    num_classes = 8
    x = torch.rand(B, C, H, W)
    y = torch.rand(B, num_classes)

    # load the pre-trained model
    net = torchvision.models.vgg16_bn(pretrained=True)

    print('#layers in vgg16.features:', len(net.features))
    print(net.features)
    print('#layers in vgg16.classifier:', len(net.classifier))
    print(net.classifier)

    # fix all pre-trained parameters
    for param in net.features.parameters():
        param.requires_grad = False

    # change last FC layer
    net.classifier[-1] = nn.Linear(in_features=net.classifier[-1].in_features,
                                   out_features=num_classes)
    for n, module in enumerate(net.classifier):
        print(n, module)

    # training settings
    criterion = nn.MSELoss()
    optimizer = optim.SGD(net.classifier.parameters(),
                          lr=0.001,
                          momentum=0.9)

    for epoch in range(100):
        optimizer.zero_grad()

        y_pred = net(x)
        loss = criterion(y_pred, y)
        loss.backward()
        optimizer.step()

        print('%d: %f' % (epoch, loss.item()))

    # test
    net.eval()
    y = torch.rand(B, num_classes)
    y_pred = net(x)
    print(y)
    print(y_pred)

Before FC layer customization:

0 Linear(in_features=25088, out_features=4096, bias=True)
1 ReLU(inplace)
2 Dropout(p=0.5)
3 Linear(in_features=4096, out_features=4096, bias=True)
4 ReLU(inplace)
5 Dropout(p=0.5)
6 Linear(in_features=4096, out_features=1000, bias=True)

After FC layer customization:

0 Linear(in_features=25088, out_features=4096, bias=True)
1 ReLU(inplace)
2 Dropout(p=0.5)
3 Linear(in_features=4096, out_features=4096, bias=True)
4 ReLU(inplace)
5 Dropout(p=0.5)
6 Linear(in_features=4096, out_features=8, bias=True)