Blog - Neural Network Console, September 25, 18:02
Introduction to the Neural Network Console Update

 


We have updated Neural Network Console Windows. In this post, we would like to introduce the new functionalities and their usage.

・Support Exporting to ONNX, NNP, and NNB files
・Support Importing from ONNX and NNP files
・Support Mixed-Precision Training
・Other functionalities and improvements
 

1. Support Exporting to ONNX, NNP, and NNB files

The cloud version has already been able to export to and download ONNX, NNP, and NNB files, and now the Windows version can export to these formats as well.
The export format can be set by right-clicking a training result and selecting Export from the menu.

 

ONNX is an open file format for deep learning models, and a large number of deep learning frameworks and libraries released by various companies are compatible with it.

ONNX
https://onnx.ai/

 

By exporting to the ONNX file format, it is now possible to reuse a model trained on Neural Network Console Windows in other deep learning frameworks, or to implement high-speed inference using optimized inference environments from chip vendors.

 

NNP is the file format of Neural Network Libraries, an open-source deep learning framework available on GitHub that can be used from Python or C++.

Neural Network Libraries
https://nnabla.org/

Neural Network Libraries (GitHub)
https://github.com/sony/nnabla

 

NNB is the file format for NNabla C Runtime, a reference inference library written almost entirely in pure C.

NNabla C Runtime
https://github.com/sony/nnabla-c-runtime

By enabling export to NNB, models trained with Neural Network Console Windows can be deployed on various embedded devices that support the C language, including SPRESENSE.

SPRESENSE
https://developer.sony.com/develop/spresense/

 

2. Support Importing from ONNX and NNP files

Users can now load ONNX and NNP files, together with their learned coefficients, in the GUI and reuse them. Import settings can be accessed by right-clicking the EDIT tab and selecting Import from the menu.

3. Mixed-Precision Training

NVIDIA GPUs from the Pascal generation onward support computation with 16-bit (half-precision, FP16) floating-point numbers. On these GPUs, users can enable mixed-precision training simply by changing the Precision item from Float to Half in the project's Global Config.

Mixed-precision training has the following merits:
・Since the memory required for parameters and buffers is halved, larger neural networks can be trained within limited GPU memory.
・When training at maximum GPU memory, the batch size can be doubled, improving computational efficiency (training speed). Furthermore, training speed can be improved further with the Tensor Cores available on GPUs from the Volta generation onward.
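The memory halving can be checked directly with NumPy (an illustration only; Neural Network Console performs the conversion internally when Precision is set to Half):

```python
import numpy as np

# One million parameters stored in single precision (Float)
# versus half precision (Half).
w_float = np.zeros((1000, 1000), dtype=np.float32)
w_half = w_float.astype(np.float16)

print(w_float.nbytes)  # 4000000 bytes
print(w_half.nbytes)   # 2000000 bytes -- half the memory
```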

 

While mixed-precision training can potentially degrade results due to the less precise computation, it has been shown that loss scaling avoids this problem in most cases.
http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html

 

In Neural Network Console, loss scaling can be implemented easily by adding a MulScalar layer after the loss function to multiply the loss by a scale factor, and setting the learning rate to 1/scale.
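As a sanity check on the arithmetic (a standalone NumPy sketch with made-up numbers, not Neural Network Console code): multiplying the loss by a scale while dividing the learning rate by the same scale leaves the gradient-descent update unchanged, because gradients are linear in the loss.

```python
import numpy as np

scale = 1024.0              # loss scale (a power of two is a common choice)
lr = 0.01                   # original learning rate

x = np.array([0.5, -1.0])   # toy input
w = np.array([0.2, 0.3])    # toy weights
t = 1.0                     # toy target

# Squared-error loss L = (w.x - t)^2 has gradient dL/dw = 2*(w.x - t)*x.
grad = 2.0 * (w @ x - t) * x

# Plain SGD update vs. update with scaled loss and learning rate lr/scale.
update_plain = lr * grad
update_scaled = (lr / scale) * (scale * grad)

print(np.allclose(update_plain, update_scaled))  # True
```

In FP16, the scaled loss keeps small gradient values away from the underflow region, which is the point of the technique.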


※ Mixed-precision training shows its full benefit when the target neural network is large enough to occupy the maximum GPU memory. Note that for the small networks in most of the sample projects, or when loading the training data takes long relative to the computation, the speed-up may not be clearly visible.

 

4. Other functionalities / improvements

We have also implemented the following functionalities/improvements.

・Upload datasets to Neural Network Console Cloud

Users can now upload a dataset to Neural Network Console Cloud directly from the dataset preview in the dataset management screen of Neural Network Console Windows. By uploading data easily, users can take advantage of the abundant GPU resources available in Neural Network Console Cloud.

 

・Addition of new layers, an optimizer, and learning rate schedulers
New features from Neural Network Libraries, including 24 new layers, a new optimizer (AMSGRAD), and 3 learning rate schedulers, have been added.

 

・New sample projects
New sample projects, including simple semantic segmentation, Fashion-MNIST, CIFAR-10, and CIFAR-100, have been added; users can download the datasets and perform training/inference with simple commands.
samples\sample_project\tutorial\semantic_segmentation
samples\sample_project\image_recognition
We will continue to add new sample projects.

 

We will continue to improve Neural Network Console, and we look forward to receiving feedback from users so that we can make the right improvements quickly!

Neural Network Console Windows
https://dl.sony.com/app/
