Implementing Fully Connected Layer Operations in PyTorch

Fully Connected Neural Networks (FC)

A fully connected neural network is one of the most basic neural network architectures. Its English name is "fully connected", hence the common abbreviation FC.

The rule behind FC is simple: every node in the network, except those in the input layer, is connected to every node in the previous layer.
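As a quick illustration (a minimal sketch; 784 and 200 are just the layer sizes used later), a single fully connected layer in PyTorch is nn.Linear, and its weight matrix holds one parameter for every input-to-output connection:

import torch
from torch import nn

fc = nn.Linear(784, 200)   # fully connected layer: 200 output nodes, each connected to all 784 inputs
print(fc.weight.shape)     # torch.Size([200, 784]) -- one weight per connection
print(fc.bias.shape)       # torch.Size([200]) -- one bias per output node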

Let's take the MNIST example from the previous post:

import torch
import torch.utils.data
from torch import optim
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F
batch_size = 200
learning_rate = 0.001
epochs = 20
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
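# Manually create the weights and biases of three fully connected layers; each weight has shape (out_features, in_features)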
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)
torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)
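# Manual forward pass: three linear layers, each followed by a ReLU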
def forward(x):
    x = x@w1.t() + b1
    x = F.relu(x)
    x = x@w2.t() + b2
    x = F.relu(x)
    x = x@w3.t() + b3
    x = F.relu(x)
    return x
optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = torch.nn.CrossEntropyLoss()
for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        logits = forward(data)
        loss = criteon(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()
            ))
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()
        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)
        ))

Above, we defined every w and b by hand and wrote our own forward function. If we use fully connected layers instead, the whole program becomes much more concise and readable.

First, we define a class for our own network structure:

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )
    def forward(self, x):
        x = self.model(x)
        return x

It inherits from nn.Module and defines the entire network structure internally.

Here, inplace tells the activation to reuse the input's storage directly rather than allocating new memory.
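A minimal check of what this means (the tensor here is only an illustration): with inplace=True the activation writes its result into the input tensor's storage instead of allocating a new tensor.

import torch
from torch import nn

x = torch.randn(4)
y = nn.LeakyReLU(inplace=True)(x)
print(y.data_ptr() == x.data_ptr())  # True: the output reuses the input's storage, no new memory is allocated

The saving comes at the cost of overwriting the input tensor, which is safe here because the preceding Linear layer does not need its own output during backpropagation.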

Beyond that, the model can be run directly; there is no need to define parameters by hand or write out the operations yourself, which is much more convenient.

We can also see that initialization is handled automatically, so there is no need to write manual initialization code as before.
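That said, if you want to reproduce the Kaiming initialization from the manual version, one possible way is to pass a function to Module.apply (a sketch; init_weights is just a helper name chosen here):

from torch import nn

def init_weights(m):
    # re-initialize every Linear layer with Kaiming-normal weights and zero bias
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

net = MLP()
net.apply(init_weights)  # apply() visits every submodule recursively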

Distinguishing nn.ReLU from F.relu()

The former is a class-based interface, while the latter is a functional interface.

The class-based versions are capitalized and must be instantiated before use, whereas the functional versions are lowercase and can be called directly.

Most importantly, the functional interface gives you more freedom and is better suited for custom operations.
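A small sketch of the two styles side by side (x is just a random tensor for illustration):

import torch
from torch import nn
import torch.nn.functional as F

x = torch.randn(3, 4)

relu = nn.ReLU()   # class interface: instantiate the module first...
y1 = relu(x)       # ...then call the instance

y2 = F.relu(x)     # functional interface: call it directly

print(torch.equal(y1, y2))  # True: both compute exactly the same thing

Inside an nn.Sequential you have to use the class form, while inside a hand-written forward() either style works.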

Complete code

import torch
import torch.utils.data
from torch import optim, nn
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F
batch_size = 200
learning_rate = 0.001
epochs = 20
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )
    def forward(self, x):
        x = self.model(x)
        return x
device = torch.device('cuda:0')
net = MLP().to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)
for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        loss = criteon(logits, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()
            ))
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()
        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()
    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)
        ))

Supplement: a fully connected network with one hidden layer in PyTorch

torch.nn provides the definition of the model, the network layers, and the loss function.

import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)
    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    # Zero the gradients before running the backward pass.
    model.zero_grad()
    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()
    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

Above, we update the parameters manually with param -= learning_rate * param.grad.

We can use torch.optim to update the parameters automatically instead. The optim package provides a variety of optimization algorithms, including SGD with momentum, RMSProp, Adam, and so on.

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The above is based on my personal experience; I hope it can serve as a useful reference. If there are any mistakes or points I have not considered, corrections are welcome.
