dlnetwork - Deep learning neural network - MATLAB

Deep learning neural network

Since R2019b

Description

A dlnetwork object specifies a deep learning neural network architecture.

Tip

For most deep learning tasks, you can use a pretrained neural network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Retrain Neural Network to Classify New Images. Alternatively, you can create and train neural networks from scratch using the trainnet and trainingOptions functions.

If the trainingOptions function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Train Network Using Custom Training Loop.

If the trainnet function does not provide the loss function that you need for your task, then you can specify a custom loss function to the trainnet function as a function handle. For loss functions that require more inputs than the predictions and targets (for example, loss functions that require access to the neural network or additional inputs), train the model using a custom training loop. To learn more, see Train Network Using Custom Training Loop.

If Deep Learning Toolbox™ does not provide the layers you need for your task, then you can create a custom layer. To learn more, see Define Custom Deep Learning Layers. For models that cannot be specified as networks of layers, you can define the model as a function. To learn more, see Train Network Using Model Function.

For more information about which training method to use for which task, see Train Deep Learning Model in MATLAB.
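
As a rough illustration of passing a custom loss to trainnet as a function handle, here is a minimal sketch; the layer sizes, the training variables XTrain and TTrain, and the mean-squared-error loss are assumptions, not part of this reference page.

% Minimal sketch: train a dlnetwork with trainnet and a function-handle loss.
% XTrain (numObservations-by-10) and TTrain (numObservations-by-1) are assumed
% to be in-memory numeric predictors and targets.
layers = [
    featureInputLayer(10)
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(1)];
net = dlnetwork(layers);

options = trainingOptions("adam",MaxEpochs=5,Verbose=false);

% Custom loss specified as a function handle of the predictions Y and targets T.
lossFcn = @(Y,T) mean((Y - T).^2,"all");

net = trainnet(XTrain,TTrain,net,lossFcn,options);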

Creation

Syntax

net = dlnetwork

net = dlnetwork(layers)

net = dlnetwork(layers,OutputNames=names)

net = dlnetwork(layers,Initialize=tf)

net = dlnetwork(layers,X1,...,XN)

net = dlnetwork(layers,X1,...,XN,OutputNames=names)

net = dlnetwork(prunableNet)

Description

Empty Network

net = dlnetwork creates a dlnetwork object with no layers. Use this syntax to create a neural network from scratch. (since R2024a)

Network with Input Layers

net = dlnetwork(layers) creates a neural network using the specified layers and initializes any unset learnable and state parameters. This syntax uses the input layer in layers to determine the size and format of the learnable and state parameters of the neural network.

Use this syntax when layers defines a complete single-input neural network, has layers arranged in series, and has an input layer.

net = dlnetwork(layers,OutputNames=names) also sets the OutputNames property. The OutputNames property specifies the layers or layer outputs that correspond to network outputs.

Use this syntax when layers defines a complete single-input multi-output neural network, has layers arranged in series, and has an input layer.

net = dlnetwork(layers,Initialize=tf) specifies whether to initialize the learnable and state parameters of the neural network. When tf is 1 (true), this syntax is equivalent to net = dlnetwork(layers). When tf is 0 (false), this syntax is equivalent to creating an empty network and then adding layers using the addLayers function.
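
For example, here is a minimal sketch (the layer sizes are placeholders) of creating an uninitialized network, making further edits, and then initializing it with the initialize function.

% Create the network without initializing its learnable and state parameters.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,8,Padding="same")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];
net = dlnetwork(layers,Initialize=false);

% ... make further edits, for example with addLayers or replaceLayer ...

% Initialize the learnable and state parameters before prediction or training.
net = initialize(net);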

Network With Unconnected Inputs

net = dlnetwork(layers,X1,...,XN) creates a neural network using the specified layers and initializes any unset learnable and state parameters. This syntax uses the network data layout objects or example inputs X1,...,XN to determine the size and format of the learnable parameters and state values of the neural network, where N is the number of network inputs.

Use this syntax when layers defines a complete neural network, has layers arranged in series, and has inputs that are not connected to input layers.

net = dlnetwork(layers,X1,...,XN,OutputNames=names) also sets the OutputNames property.

Use this syntax when layers defines a complete neural network, has multiple outputs, has layers arranged in series, and has inputs that are not connected to input layers.
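
As a rough illustration, this sketch builds a network with no input layer and uses a networkDataLayout object to describe the expected input; the layer sizes and the "SSCB" format are assumptions for 2-D image input.

layers = [
    convolution2dLayer(3,16,Padding="same")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];

% Describe the input: 28-by-28 grayscale images with an unknown batch size.
X = networkDataLayout([28 28 1 NaN],"SSCB");

% The layout determines the sizes of the learnable parameters.
net = dlnetwork(layers,X);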

Prunable Network

net = dlnetwork(prunableNet) converts a TaylorPrunableNetwork object to a dlnetwork object by removing the filters selected for pruning from the convolution layers of prunableNet. The returned dlnetwork object is compressed: it has fewer learnable parameters and is smaller in size.
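
The conversion typically appears at the end of a pruning workflow. The following is only a sketch, assuming net is a trained dlnetwork and that a pruning loop has already scored the filters; the loop itself is omitted.

% Wrap a trained dlnetwork for pruning by Taylor approximation.
prunableNet = taylorPrunableNetwork(net);

% ... run a pruning loop that evaluates gradients on training data and
% iteratively removes low-importance filters from prunableNet ...

% Convert back to a compressed dlnetwork with the pruned filters removed.
netPruned = dlnetwork(prunableNet);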

Input Arguments

layers - Network layers

Network layers, specified as a Layer array.

The software connects the layers in series.

For a list of supported layers, see List of Deep Learning Layers.

X1,...,XN - Example data or data layouts

Example data or data layouts to use to determine the sizes and formats of learnable and state parameters, specified as formatted dlarray objects or formatted networkDataLayout objects. The software propagates X1,...,XN through the network to determine the appropriate sizes and formats of the learnable and state parameters of the dlnetwork object and initializes any unset learnable or state parameters.

The order of X1,...,XN must match the order of the layers that require inputs in layers.

Note

Automatic initialization uses only the size and format information of the input data. For initialization that depends on the values of the input data, you must initialize the learnable parameters manually.

tf - Flag to initialize learnable and state parameters

Flag to initialize learnable and state parameters, specified as one of these values:

  • 1 (true) — Initialize the learnable and state parameters. The software uses the input layer in layers to determine the sizes of the learnable and state parameters.

  • 0 (false) — Do not initialize the learnable and state parameters. Use this option when:

    • You expect to make further edits to the neural network, for example, adding or removing layers and connections.

    • You use the network in a custom layer and you want to use a custom initialize function.

Neural network prediction and custom training loops require an initialized network. To initialize an uninitialized network, use the initialize function.

prunableNet - Network for pruning

Network for pruning by using first-order Taylor approximation, specified as a TaylorPrunableNetwork object.

Pruning a deep neural network requires the Deep Learning Toolbox Model Quantization Library support package. This support package is a free add-on that you can download using the Add-On Explorer. Alternatively, see Deep Learning Toolbox Model Quantization Library.

Properties

Layers - Network layers

Network layers, specified as a Layer array.

Connections - Layer connections

Layer connections, specified as a table with two columns.

Each table row represents a connection in the neural network. The first column, Source, specifies the source of each connection. The second column, Destination, specifies the destination of each connection. The connection sources and destinations are either layer names or have the form "layerName/IOName", where "IOName" is the name of the layer input or output.

Data Types: table

Learnables - Network learnable parameters

Network learnable parameters, specified as a table with three columns:

  • Layer — Layer name, specified as a string scalar.

  • Parameter — Parameter name, specified as a string scalar.

  • Value — Value of parameter, specified as a dlarray object.

The network learnable parameters contain the features learned by the network, for example, the weights of convolution and fully connected layers.

The learnable parameter values can be complex-valued. (since R2024a)

Data Types: table

State - Network state

Network state, specified as a table.

The network state is a table with three columns:

  • Layer – Layer name, specified as a string scalar.

  • Parameter – State parameter name, specified as a string scalar.

  • Value – Value of state parameter, specified as a dlarray object.

Layer states contain information calculated during the layer operation to be retained for use in subsequent forward passes of the layer. For example, the cell state and hidden state of LSTM layers, or running statistics in batch normalization layers.

For recurrent layers, such as LSTM layers, with the HasStateInputs property set to 1 (true), the state table does not contain entries for the states of that layer.

During training or inference, you can update the network state using the output of the forward and predict functions.

The state values can be complex-valued. (since R2024a)

Data Types: table
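
For example, in a custom training loop you can store the state returned by the forward function back in the network (a minimal sketch; X is assumed to be a formatted dlarray of training data).

% The forward pass returns updated state values, such as the running
% statistics of batch normalization layers.
[Y,state] = forward(net,X);

% Store the updated state in the network.
net.State = state;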

InputNames - Names of network inputs

This property is read-only.

Names of the network inputs, specified as a cell array of character vectors.

Network inputs are the input layers and the unconnected inputs of layers.

For input layers and layers with a single input, the input name is the name of the layer. For layers with multiple inputs, the input name is "layerName/inputName", where layerName is the name of the layer and inputName is the name of the layer input.

Data Types: cell

OutputNames - Names of network outputs

Names of the network outputs, specified as a cell array of character vectors.

For layers with a single output, the output name is the name of the layer. For layers with multiple outputs, the output name is "layerName/outputName", where layerName is the name of the layer and outputName is the name of the layer output.

If you do not specify the output names, then the software sets the OutputNames property to the layers with unconnected outputs.

The predict and forward functions, by default, return the data output by the layers given by the OutputNames property.

Data Types: cell
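
For instance, a network with two outputs returns one output argument per entry of OutputNames, in the same order (a sketch; X is an assumed formatted dlarray and the output names depend on the network).

% Sketch: a two-output network, for example one that returns classification
% scores and a regression response, yields two outputs from predict.
[scores,response] = predict(net,X);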

Initialized - Flag for initialized network

This property is read-only.

Flag for initialized network, specified as one of these values:

  • 1 (true) — Network is initialized and is ready for prediction and custom training loops. If you change the values of the learnable or state parameters, then the network remains initialized.

  • 0 (false) — Network is not initialized and is not ready for prediction or custom training loops. To initialize an uninitialized network, use the initialize function.

Data Types: logical

Object Functions

addInputLayer - Add input layer to network
addLayers - Add layers to neural network
removeLayers - Remove layers from neural network
connectLayers - Connect layers in neural network
disconnectLayers - Disconnect layers in neural network
replaceLayer - Replace layer in neural network
getLayer - Look up a layer by name or path
expandLayers - Expand network layers
groupLayers - Group layers into network layers
summary - Print network summary
plot - Plot neural network architecture
initialize - Initialize learnable and state parameters of a dlnetwork
predict - Compute deep learning network output for inference
forward - Compute deep learning network output for training
resetState - Reset state parameters of neural network
setL2Factor - Set L2 regularization factor of layer learnable parameter
setLearnRateFactor - Set learn rate factor of layer learnable parameter
getLearnRateFactor - Get learn rate factor of layer learnable parameter
getL2Factor - Get L2 regularization factor of layer learnable parameter

Examples

Create Neural Network from Scratch

Define a two-output neural network that predicts both categorical labels and numeric values given 2-D images as input.

Specify the number of classes and responses.

numClasses = 10;
numResponses = 1;

Create an empty neural network.

net = dlnetwork;

Define the layers of the main branch of the network and the softmax output.

layers = [
    imageInputLayer([28 28 1],Normalization="none")
    convolution2dLayer(5,16,Padding="same")
    batchNormalizationLayer
    reluLayer(Name="relu_1")
    convolution2dLayer(3,32,Padding="same",Stride=2)
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(3,32,Padding="same")
    batchNormalizationLayer
    reluLayer
    additionLayer(2,Name="add")
    fullyConnectedLayer(numClasses)
    softmaxLayer(Name="softmax")];

net = addLayers(net,layers);

Add the skip connection.

layers = [
    convolution2dLayer(1,32,Stride=2,Name="conv_skip")
    batchNormalizationLayer
    reluLayer(Name="relu_skip")];

net = addLayers(net,layers);
net = connectLayers(net,"relu_1","conv_skip");
net = connectLayers(net,"relu_skip","add/in2");

Add the fully connected layer for the regression output.

layers = fullyConnectedLayer(numResponses,Name="fc_2");

net = addLayers(net,layers);
net = connectLayers(net,"add","fc_2");

View the neural network in a plot.

figure
plot(net)

Convert Layer Array to Neural Network

If you have a layer array that defines a complete single-input neural network, has layers arranged in series, and has an input layer, then you can convert the layer array to a dlnetwork object directly.

Specify an LSTM network as a layer array.

layers = [
    sequenceInputLayer(12)
    lstmLayer(100)
    fullyConnectedLayer(9)
    softmaxLayer];

Convert the layer array to a dlnetwork object. Because the layer array has an input layer and no other inputs, the software initializes the neural network.

net = dlnetwork(layers)
net = 
  dlnetwork with properties:

         Layers: [4x1 nnet.cnn.layer.Layer]
    Connections: [3x2 table]
     Learnables: [5x3 table]
          State: [2x3 table]
     InputNames: {'sequenceinput'}
    OutputNames: {'softmax'}
    Initialized: 1

  View summary with summary.

Freeze Learnable Parameters

Load a pretrained network.

net = imagePretrainedNetwork;

The Learnables property of the dlnetwork object is a table that contains the learnable parameters of the network. The table includes parameters of nested layers in separate rows. View the first few rows of the learnables table.

learnables = net.Learnables;
head(learnables)

           Layer            Parameter           Value
    __________________    ___________    ___________________

    "conv1"               "Weights"      {3x3x3x64  dlarray}
    "conv1"               "Bias"         {1x1x64    dlarray}
    "fire2-squeeze1x1"    "Weights"      {1x1x64x16 dlarray}
    "fire2-squeeze1x1"    "Bias"         {1x1x16    dlarray}
    "fire2-expand1x1"     "Weights"      {1x1x16x64 dlarray}
    "fire2-expand1x1"     "Bias"         {1x1x64    dlarray}
    "fire2-expand3x3"     "Weights"      {3x3x16x64 dlarray}
    "fire2-expand3x3"     "Bias"         {1x1x64    dlarray}

To freeze the learnable parameters of the network, loop over the learnable parameters and set the learn rate to 0 using the setLearnRateFactor function.

factor = 0;
numLearnables = size(learnables,1);

for i = 1:numLearnables
    layerName = learnables.Layer(i);
    parameterName = learnables.Parameter(i);
    net = setLearnRateFactor(net,layerName,parameterName,factor);
end

To use the updated learn rate factors when training, you must pass the dlnetwork object to the update function in the custom training loop. For example, use the command

[net,velocity] = sgdmupdate(net,gradients,velocity);

Extended Capabilities

Version History

Introduced in R2019b

R2024a: Create an empty neural network using the dlnetwork function with no input arguments. Use empty neural networks as a starting point for building neural networks from scratch.

R2024a: The values in the Learnables and State properties can be complex-valued.

See Also

trainnet | trainingOptions | dlarray | dlgradient | dlfeval | forward | predict | initialize | TaylorPrunableNetwork

Topics

  • Retrain Neural Network to Classify New Images
  • Train Neural Network with Tabular Data
  • Train Network Using Custom Training Loop
  • Train Generative Adversarial Network (GAN)
  • Define Custom Training Loops, Loss Functions, and Networks


