昇思MindSpore is a full-scenario deep learning framework with three design goals: easy development, efficient execution, and unified deployment across all scenarios.
Easy development means friendly APIs and low debugging effort; efficient execution covers compute efficiency, data-preprocessing efficiency, and distributed-training efficiency; full-scenario means the framework supports cloud, edge, and device-side deployment alike.
Overall architecture of 昇思MindSpore:
Ascend computing is a full-stack AI computing infrastructure and application ecosystem built on the Ascend processor series. It comprises the Ascend chip series, the Atlas hardware series, the CANN chip-enablement layer, the MindSpore AI framework, ModelArts, the MindX application-enablement layer, and more.
The following uses MindSpore's API to quickly implement a simple deep learning model.
import mindspore
from mindspore import nn
from mindspore.dataset import vision, transforms
from mindspore.dataset import MnistDataset
MindSpore provides a pipeline-based data engine that achieves efficient data preprocessing through datasets (Dataset) and data transforms (Transforms).
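Before turning to MNIST, here is a minimal sketch of the Dataset → Transforms → batch pattern. The in-memory NumPy source below is made up purely for illustration and is not part of the original tutorial:

import numpy as np
from mindspore.dataset import GeneratorDataset

# Hypothetical in-memory source: 8 fake (image, label) samples
samples = [(np.random.rand(28, 28, 1).astype(np.float32),
            np.array(i % 10, dtype=np.int32)) for i in range(8)]
ds = GeneratorDataset(samples, column_names=["image", "label"], shuffle=False)
ds = ds.map(lambda x: x * 2.0, input_columns="image")  # transform stage
ds = ds.batch(4)                                       # batch stage
for item in ds.create_dict_iterator():
    print(item["image"].shape)  # (4, 28, 28, 1)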
from download import download
url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/" \
"notebook/datasets/MNIST_Data.zip"
path = download(url, "./", kind="zip", replace=True)
Once the download completes, we obtain the dataset objects.
train_dataset = MnistDataset('MNIST_Data/train')
test_dataset = MnistDataset('MNIST_Data/test')
def datapipe(dataset, batch_size):
    image_transforms = [
        vision.Rescale(1.0 / 255.0, 0),                   # scale pixel values to [0, 1]
        vision.Normalize(mean=(0.1307,), std=(0.3081,)),  # normalize with MNIST statistics
        vision.HWC2CHW()                                  # convert HWC layout to CHW
    ]
    label_transform = transforms.TypeCast(mindspore.int32)
    dataset = dataset.map(image_transforms, 'image')
    dataset = dataset.map(label_transform, 'label')
    dataset = dataset.batch(batch_size)
    return dataset
train_dataset = datapipe(train_dataset, 64)
test_dataset = datapipe(test_dataset, 64)
Use create_tuple_iterator or create_dict_iterator to iterate over the dataset and inspect the shape and datatype of the images and labels.
for image, label in test_dataset.create_tuple_iterator():
    print(f"Shape of image [N, C, H, W]: {image.shape} {image.dtype}")
    print(f"Shape of label: {label.shape} {label.dtype}")
    break
# Output:
Shape of image [N, C, H, W]: (64, 1, 28, 28) Float32
Shape of label: (64,) Int32
for data in test_dataset.create_dict_iterator():
    print(f"Shape of image [N, C, H, W]: {data['image'].shape} {data['image'].dtype}")
    print(f"Shape of label: {data['label'].shape} {data['label'].dtype}")
    break
# Output:
Shape of image [N, C, H, W]: (64, 1, 28, 28) Float32
Shape of label: (64,) Int32
The mindspore.nn module provides the building blocks for neural networks; its nn.Cell class is the base class of all networks and the basic unit of composition. To define a custom network, subclass nn.Cell and override the __init__ and construct methods: __init__ contains the definitions of all network layers, while construct describes how the data (Tensor) is transformed.
class Network(nn.Cell):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.dense_relu_sequential = nn.SequentialCell(
            nn.Dense(28*28, 512),
            nn.ReLU(),
            nn.Dense(512, 512),
            nn.ReLU(),
            nn.Dense(512, 10)
        )

    def construct(self, x):
        x = self.flatten(x)
        logits = self.dense_relu_sequential(x)
        return logits
model = Network()
print(model)
# Output:
Network<
  (flatten): Flatten<>
  (dense_relu_sequential): SequentialCell<
    (0): Dense<input_channels=784, output_channels=512, has_bias=True>
    (1): ReLU<>
    (2): Dense<input_channels=512, output_channels=512, has_bias=True>
    (3): ReLU<>
    (4): Dense<input_channels=512, output_channels=10, has_bias=True>
    >
  >
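As a quick sanity check (a sketch of my own, not part of the original tutorial; the random input is hypothetical), we can push a dummy batch through the untrained network and confirm the logits have shape (N, 10):

import numpy as np
# Hypothetical dummy batch: one 28x28 single-channel image
X = mindspore.Tensor(np.random.rand(1, 1, 28, 28).astype(np.float32))
logits = model(X)
print(logits.shape)  # (1, 10)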
In model training, a complete training process implements the following three steps:

1. Forward pass: the model computes predictions (logits) and the prediction loss (loss) against the correct labels (label).
2. Backpropagation: automatic differentiation computes the gradients of the loss with respect to the model parameters (parameters).
3. Parameter optimization: the gradients are applied to update the parameters.
MindSpore uses a functional automatic differentiation mechanism, so the steps above are implemented as follows: define a forward computation function, obtain a gradient function from it through function transformation with mindspore.value_and_grad, and define a one-step training function that performs the forward pass, backpropagation, and parameter optimization.
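Before applying this to the model, here is a minimal, self-contained sketch of functional differentiation on a scalar function (illustrative only, not part of the original text):

from mindspore import Tensor

def f(x):
    return x ** 2

# Function transformation: f -> df/dx
grad_f = mindspore.grad(f)
print(grad_f(Tensor(3.0, mindspore.float32)))  # 6.0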
loss_fn = nn.CrossEntropyLoss()
optimizer = nn.SGD(model.trainable_params(), 1e-2)
# 1. Define forward function
def forward_fn(data, label):
    logits = model(data)
    loss = loss_fn(logits, label)
    return loss, logits

# 2. Get gradient function
grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

# 3. Define function of one-step training
def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
    return loss
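Optionally (my addition, not in the original; it assumes a MindSpore 2.x version where mindspore.jit is available), the one-step training function can be compiled into a static graph for faster execution:

# Assumption: MindSpore >= 2.0, where mindspore.jit exists
@mindspore.jit
def train_step(data, label):
    # Same body as above, compiled to a graph on first call
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)
    return loss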
def train(model, dataset):
    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        loss = train_step(data, label)

        if batch % 100 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f} [{current:>3d}/{size:>3d}]")
Besides training, we define a test function to evaluate the model's performance.
def test(model, dataset, loss_fn):
    num_batches = dataset.get_dataset_size()
    model.set_train(False)
    total, test_loss, correct = 0, 0, 0
    for data, label in dataset.create_tuple_iterator():
        pred = model(data)
        total += len(data)
        test_loss += loss_fn(pred, label).asnumpy()
        correct += (pred.argmax(1) == label).asnumpy().sum()
    test_loss /= num_batches
    correct /= total
    print(f"Test: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
Training iterates over the dataset multiple times; one complete pass is called an epoch. In each epoch we loop over the training set to train, then run predictions on the test set. Printing the loss and prediction accuracy (Accuracy) for each epoch shows the loss steadily decreasing and the accuracy steadily increasing.
epochs = 3
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(model, train_dataset)
    test(model, test_dataset, loss_fn)
print("Done!")
# Output:
Epoch 1
-------------------------------
loss: 2.295042 [ 0/938]
loss: 1.709892 [100/938]
loss: 0.858865 [200/938]
loss: 0.645255 [300/938]
loss: 0.485651 [400/938]
loss: 0.564807 [500/938]
loss: 0.286971 [600/938]
loss: 0.533787 [700/938]
loss: 0.295917 [800/938]
loss: 0.310398 [900/938]
Test:
Accuracy: 90.8%, Avg loss: 0.320189
Epoch 2
-------------------------------
loss: 0.409654 [ 0/938]
loss: 0.317387 [100/938]
loss: 0.203188 [200/938]
loss: 0.289398 [300/938]
loss: 0.280859 [400/938]
loss: 0.348946 [500/938]
loss: 0.251603 [600/938]
loss: 0.277725 [700/938]
loss: 0.196523 [800/938]
loss: 0.212989 [900/938]
Test:
Accuracy: 92.6%, Avg loss: 0.256554
Epoch 3
-------------------------------
loss: 0.356105 [ 0/938]
loss: 0.277434 [100/938]
loss: 0.197128 [200/938]
loss: 0.344947 [300/938]
loss: 0.218666 [400/938]
loss: 0.218713 [500/938]
loss: 0.209656 [600/938]
loss: 0.114822 [700/938]
loss: 0.275429 [800/938]
loss: 0.109844 [900/938]
Test:
Accuracy: 93.8%, Avg loss: 0.213427
Done!
After training completes, the model's parameters need to be saved.
mindspore.save_checkpoint(model, "model.ckpt")
print("Saved Model to model.ckpt")
Loading the saved weights takes two steps:

1. Re-instantiate the model object to reconstruct the network.
2. Load the checkpoint file and load its parameters into the model.
model = Network()
# Load the checkpoint and load its parameters into the model
param_dict = mindspore.load_checkpoint("model.ckpt")
param_not_load, _ = mindspore.load_param_into_net(model, param_dict)
print(param_not_load)
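Here load_param_into_net returns the network parameters that were not found in the checkpoint, so printing an empty list indicates that all weights loaded successfully.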
model.set_train(False)
for data, label in test_dataset:
    pred = model(data)
    predicted = pred.argmax(1)
    print(f'Predicted: "{predicted[:10]}", Actual: "{label[:10]}"')
    break
# Predicted: "[6 2 6 5 9 6 0 0 5 6]", Actual: "[6 2 6 5 9 6 0 0 5 6]"
Working with Huawei's Ascend AI stack feels much like working with PyTorch and other popular deep learning frameworks: building the model, passing data through it, and saving the final result all feel efficient. MindSpore's API design leans toward simplicity and structure, with, for example, nn.SequentialCell for building sequential layers and nn.Flatten for flattening tensors; it also optimizes the deployment of deep learning models on mobile and embedded devices through the lightweight MindSpore Lite. PyTorch has a huge community and a rich ecosystem, while MindSpore, as a newer framework, is still growing its community and ecosystem, though Huawei's support and investment are steadily strengthening its influence.
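For instance (a sketch of my own, not from the original text; it assumes the trained model object from above is still in scope), a network can be exported to the MindIR format that MindSpore Lite consumes:

import numpy as np
# Dummy input that fixes the exported graph's input shape (hypothetical values)
inputs = mindspore.Tensor(np.ones([1, 1, 28, 28]).astype(np.float32))
mindspore.export(model, inputs, file_name="mnist", file_format="MINDIR")
# Produces mnist.mindir, which MindSpore Lite can load for on-device inference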