Predict responses using a trained deep learning neural network
You can make predictions using a trained neural network for deep learning on either a CPU or GPU. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). Specify the hardware requirements using the 'ExecutionEnvironment' name-value pair argument.
[YPred1,...,YPredM] = predict(___) predicts responses for the M outputs of a multi-output network using any of the previous syntaxes. The output YPredj corresponds to the network output net.OutputNames(j). To return categorical outputs for the classification output layers, set the 'ReturnCategorical' option to true.
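For example, a minimal sketch for a hypothetical two-output network net with input data X:

[YPred1,YPred2] = predict(net,X);

Here YPred1 and YPred2 correspond to net.OutputNames(1) and net.OutputNames(2), respectively.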
___ = predict(___,Name,Value) predicts responses with additional options specified by one or more name-value pair arguments.
Tip
When making predictions with sequences of different lengths, the mini-batch size can impact the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify mini-batch size and padding options, use the 'MiniBatchSize' and 'SequenceLength' options, respectively.
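For example, a minimal sketch, assuming a trained sequence network net and a cell array sequences of variable-length sequences:

YPredA = predict(net,sequences,'MiniBatchSize',27); % padding within each mini-batch
YPredB = predict(net,sequences,'MiniBatchSize',1);  % one sequence per mini-batch, so no padding

Comparing YPredA and YPredB shows how padding introduced by larger mini-batches can change the scores.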
Load the sample data.
[XTrain,YTrain] = digitTrain4DArrayData;
digitTrain4DArrayData loads the digit training set as 4-D array data. XTrain is a 28-by-28-by-1-by-5000 array, where 28 is the height and 28 is the width of the images, 1 is the number of channels, and 5000 is the number of synthetic images of handwritten digits. YTrain is a categorical vector containing the labels for each observation.
Construct the convolutional neural network architecture.
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
Set the options to default settings for the stochastic gradient descent with momentum.
options = trainingOptions('sgdm');
Train the network.
rng('default')
net = trainNetwork(XTrain,YTrain,layers,options);
Training on single CPU.
Initializing input data normalization.
|========================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Mini-batch  |  Base Learning  |
|         |             |   (hh:mm:ss)   |   Accuracy   |     Loss     |      Rate       |
|========================================================================================|
|       1 |           1 |       00:00:00 |       10.16% |       2.3195 |          0.0100 |
|       2 |          50 |       00:00:04 |       50.78% |       1.7102 |          0.0100 |
|       3 |         100 |       00:00:09 |       63.28% |       1.1632 |          0.0100 |
|       4 |         150 |       00:00:13 |       60.16% |       1.0859 |          0.0100 |
|       6 |         200 |       00:00:16 |       68.75% |       0.8996 |          0.0100 |
|       7 |         250 |       00:00:19 |       76.56% |       0.7920 |          0.0100 |
|       8 |         300 |       00:00:22 |       73.44% |       0.8411 |          0.0100 |
|       9 |         350 |       00:00:27 |       81.25% |       0.5508 |          0.0100 |
|      11 |         400 |       00:00:30 |       90.62% |       0.4744 |          0.0100 |
|      12 |         450 |       00:00:34 |       92.19% |       0.3614 |          0.0100 |
|      13 |         500 |       00:00:38 |       94.53% |       0.3160 |          0.0100 |
|      15 |         550 |       00:00:43 |       96.09% |       0.2544 |          0.0100 |
|      16 |         600 |       00:00:46 |       92.19% |       0.2765 |          0.0100 |
|      17 |         650 |       00:00:49 |       95.31% |       0.2460 |          0.0100 |
|      18 |         700 |       00:00:54 |       99.22% |       0.1418 |          0.0100 |
|      20 |         750 |       00:00:58 |       98.44% |       0.1000 |          0.0100 |
|      21 |         800 |       00:01:03 |       98.44% |       0.1449 |          0.0100 |
|      22 |         850 |       00:01:07 |       98.44% |       0.0989 |          0.0100 |
|      24 |         900 |       00:01:10 |       96.88% |       0.1315 |          0.0100 |
|      25 |         950 |       00:01:15 |      100.00% |       0.0859 |          0.0100 |
|      26 |        1000 |       00:01:19 |      100.00% |       0.0701 |          0.0100 |
|      27 |        1050 |       00:01:27 |      100.00% |       0.0759 |          0.0100 |
|      29 |        1100 |       00:01:33 |       99.22% |       0.0663 |          0.0100 |
|      30 |        1150 |       00:01:39 |       98.44% |       0.0776 |          0.0100 |
|      30 |        1170 |       00:01:40 |       99.22% |       0.0732 |          0.0100 |
|========================================================================================|
Training finished: Max epochs completed.
Run the trained network on a test set and predict the scores.
[XTest,YTest] = digitTest4DArrayData;
YPred = predict(net,XTest);
By default, predict uses a CUDA®-enabled GPU with compute capability 3.0, when one is available. You can also choose to run predict on a CPU by using the 'ExecutionEnvironment','cpu' name-value pair argument.
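For example, to force CPU execution on the test data above:

YPredCPU = predict(net,XTest,'ExecutionEnvironment','cpu');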
Display the first 10 images in the test data and compare them to the predictions from predict.
YTest(1:10,:)
ans = 10x1 categorical
0
0
0
0
0
0
0
0
0
0
YPred(1:10,:)
ans = 10x10 single matrix
0.9978 0.0001 0.0008 0.0002 0.0003 0.0000 0.0004 0.0000 0.0002 0.0003
0.8881 0.0000 0.0474 0.0001 0.0000 0.0002 0.0029 0.0001 0.0014 0.0598
0.9998 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0001
0.9814 0.0000 0.0000 0.0000 0.0000 0.0000 0.0046 0.0000 0.0011 0.0129
0.9748 0.0000 0.0132 0.0003 0.0000 0.0000 0.0002 0.0004 0.0111 0.0001
0.9873 0.0000 0.0001 0.0000 0.0000 0.0000 0.0007 0.0000 0.0072 0.0047
0.9981 0.0000 0.0000 0.0000 0.0000 0.0000 0.0018 0.0000 0.0000 0.0000
1.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.9265 0.0000 0.0046 0.0000 0.0006 0.0009 0.0001 0.0000 0.0018 0.0655
0.9327 0.0000 0.0139 0.0012 0.0001 0.0001 0.0378 0.0000 0.0111 0.0031
YTest contains the digits corresponding to the images in XTest. The columns of YPred contain predict's estimates of the probability that an image contains a particular digit. That is, the first column contains the probability estimate that the given image is digit 0, the second column contains the probability estimate that the image is digit 1, the third column contains the probability estimate that the image is digit 2, and so on. You can see that predict's estimated probabilities for the correct digits are almost 1 and the probabilities for any other digit are almost 0. predict correctly estimates the first 10 observations as digit 0.
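To recover class labels from the scores yourself, take the index of the maximum score in each row (a sketch; the classify function returns labels directly):

[~,idx] = max(YPred,[],2);              % index of the highest-scoring class per image
predictedDigits = categorical(idx - 1); % column 1 corresponds to digit 0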
Load the pretrained network. JapaneseVowelsNet is a pretrained LSTM network trained on the Japanese Vowels data set, as described in [1] and [2]. It was trained on the sequences sorted by sequence length with a mini-batch size of 27.
load JapaneseVowelsNet
View the network architecture.
net.Layers
ans = 
  5x1 Layer array with layers:

     1   'sequenceinput'   Sequence Input          Sequence input with 12 dimensions
     2   'lstm'            LSTM                    LSTM with 100 hidden units
     3   'fc'              Fully Connected         9 fully connected layer
     4   'softmax'         Softmax                 softmax
     5   'classoutput'     Classification Output   crossentropyex with '1' and 8 other classes
Load the test data.
[XTest,YTest] = japaneseVowelsTestData;
Make predictions on the test data.
YPred = predict(net,XTest);
View the prediction scores for the first 10 sequences.
YPred(1:10,:)
ans = 10x9 single matrix
0.9918 0.0000 0.0000 0.0000 0.0006 0.0010 0.0001 0.0006 0.0059
0.9868 0.0000 0.0000 0.0000 0.0006 0.0010 0.0001 0.0010 0.0105
0.9924 0.0000 0.0000 0.0000 0.0006 0.0010 0.0001 0.0006 0.0054
0.9896 0.0000 0.0000 0.0000 0.0006 0.0009 0.0001 0.0007 0.0080
0.9965 0.0000 0.0000 0.0000 0.0007 0.0009 0.0000 0.0003 0.0016
0.9888 0.0000 0.0000 0.0000 0.0006 0.0010 0.0001 0.0008 0.0087
0.9886 0.0000 0.0000 0.0000 0.0006 0.0010 0.0001 0.0008 0.0089
0.9982 0.0000 0.0000 0.0000 0.0006 0.0007 0.0000 0.0001 0.0004
0.9883 0.0000 0.0000 0.0000 0.0006 0.0010 0.0001 0.0008 0.0093
0.9959 0.0000 0.0000 0.0000 0.0007 0.0011 0.0000 0.0004 0.0019
Compare these prediction scores to the labels of these sequences. The function assigns high prediction scores to the correct class.
YTest(1:10)
ans = 10x1 categorical
1
1
1
1
1
1
1
1
1
1
net — Trained network
SeriesNetwork object | DAGNetwork object

Trained network, specified as a SeriesNetwork or a DAGNetwork object. You can get a trained network by importing a pretrained network (for example, by using the googlenet function) or by training your own network using trainNetwork.
imds — Image datastore
ImageDatastore object

Image datastore, specified as an ImageDatastore object.

ImageDatastore allows batch reading of JPG or PNG image files using prefetching. If you use a custom function for reading the images, then ImageDatastore does not prefetch.
Tip
Use augmentedImageDatastore for efficient preprocessing of images for deep learning, including image resizing.

Do not use the readFcn option of imageDatastore for preprocessing or resizing, as this option is usually significantly slower.
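For example, a minimal sketch, assuming a hypothetical image folder myImages and a network net that expects 224-by-224 RGB input:

imds = imageDatastore('myImages');
auimds = augmentedImageDatastore([224 224],imds); % resizes the images on the fly
YPred = predict(net,auimds);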
ds — Datastore

Datastore for out-of-memory data and preprocessing. The datastore must return data in a table or a cell array. The format of the datastore output depends on the network architecture.

Network Architecture | Datastore Output
Single input | Table or cell array, where the first column specifies the predictors. Table elements must be scalars, row vectors, or 1-by-1 cell arrays containing a numeric array. Custom datastores must output tables.
Multiple input | Cell array with at least numInputs columns, where numInputs is the number of network inputs. The first numInputs columns specify the predictors for each input. The order of inputs is given by the InputNames property of the network.

For example, reading from a single-input datastore returns output such as:

data = read(ds)

data =
  4×1 table
        Predictors    
    __________________
    {224×224×3 double}
    {224×224×3 double}
    {224×224×3 double}
    {224×224×3 double}

or

data =
  4×1 cell array
    {224×224×3 double}
    {224×224×3 double}
    {224×224×3 double}
    {224×224×3 double}

Reading from a two-input datastore returns output such as:

data = read(ds)

data =
  4×2 cell array
    {224×224×3 double}    {128×128×3 double}
    {224×224×3 double}    {128×128×3 double}
    {224×224×3 double}    {128×128×3 double}
    {224×224×3 double}    {128×128×3 double}
The format of the predictors depends on the type of data.

Data | Format of Predictors
2-D image | h-by-w-by-c numeric array, where h, w, and c are the height, width, and number of channels of the image, respectively.
3-D image | h-by-w-by-d-by-c numeric array, where h, w, d, and c are the height, width, depth, and number of channels of the image, respectively.
Vector sequence | c-by-s matrix, where c is the number of features of the sequence and s is the sequence length.
1-D image sequence | h-by-c-by-s array, where h and c correspond to the height and number of channels of the image, respectively, and s is the sequence length. Each sequence in the mini-batch must have the same sequence length.
2-D image sequence | h-by-w-by-c-by-s array, where h, w, and c correspond to the height, width, and number of channels of the image, respectively, and s is the sequence length. Each sequence in the mini-batch must have the same sequence length.
3-D image sequence | h-by-w-by-d-by-c-by-s array, where h, w, d, and c correspond to the height, width, depth, and number of channels of the image, respectively, and s is the sequence length. Each sequence in the mini-batch must have the same sequence length.
Features | c-by-1 column vector, where c is the number of features.
For more information, see Datastores for Deep Learning.
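For example, a sketch of multiple-input datastore construction, assuming hypothetical image folders imagesA and imagesB feeding a two-input network net:

ds1 = imageDatastore('imagesA');
ds2 = imageDatastore('imagesB');
ds = combine(ds1,ds2); % read(ds) returns one row per observation with two cells
YPred = predict(net,ds);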
X — Image or feature data

Image or feature data, specified as a numeric array. The size of the array depends on the type of input:

Input | Description
2-D images | An h-by-w-by-c-by-N numeric array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images.
3-D images | An h-by-w-by-d-by-c-by-N numeric array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images.
Features | An N-by-numFeatures numeric array, where N is the number of observations and numFeatures is the number of features of the input data.

If the array contains NaNs, then they are propagated through the network.
For networks with multiple inputs, you can specify multiple arrays X1, …, XN, where N is the number of network inputs and the input Xi corresponds to the network input net.InputNames(i).
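For example, a sketch for a hypothetical network with an image input and a feature input (the variable names are illustrative):

% XImg is h-by-w-by-c-by-N, XFeat is N-by-numFeatures
YPred = predict(net,XImg,XFeat);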
sequences — Sequence or time series data

Sequence or time series data, specified as an N-by-1 cell array of numeric arrays, where N is the number of observations, a numeric array representing a single sequence, or a datastore.

For cell array or numeric array input, the dimensions of the numeric arrays containing the sequences depend on the type of data.

Input | Description
Vector sequences | c-by-s matrices, where c is the number of features of the sequences and s is the sequence length.
1-D image sequences | h-by-c-by-s arrays, where h and c correspond to the height and number of channels of the images, respectively, and s is the sequence length.
2-D image sequences | h-by-w-by-c-by-s arrays, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and s is the sequence length.
3-D image sequences | h-by-w-by-d-by-c-by-s arrays, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and s is the sequence length.
For datastore input, the datastore must return data as a cell array of sequences or a table whose first column contains sequences. The dimensions of the sequence data must correspond to the table above.
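For example, a minimal sketch of cell array input, assuming a network net trained on vector sequences with 12 features:

sequences = {rand(12,20); rand(12,35); rand(12,15)}; % three sequences of different lengths
YPred = predict(net,sequences);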
tbl — Table of image or feature data
table

Table of image or feature data. Each row in the table corresponds to an observation.

The arrangement of predictors in the table columns depends on the type of input data.

Input | Predictors
Image data | Specify predictors in a single column.
Feature data | Numeric scalars. Specify predictors in the first numFeatures columns of the table, where numFeatures is the number of features of the input data.

This argument supports networks with a single input only.

Data Types: table
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' ').

Example: 'MiniBatchSize',256 specifies the mini-batch size as 256.
MiniBatchSize — Size of mini-batches

Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory, but can lead to faster predictions.

When making predictions with sequences of different lengths, the mini-batch size can impact the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify mini-batch size and padding options, use the 'MiniBatchSize' and 'SequenceLength' options, respectively.

Example: 'MiniBatchSize',256
Acceleration — Performance optimization
'auto' (default) | 'mex' | 'none'

Performance optimization, specified as the comma-separated pair consisting of 'Acceleration' and one of the following:
'auto'
— Automatically apply a number of optimizations
suitable for the input network and hardware resources.
'mex' — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
'none'
— Disable all acceleration.
The default option is 'auto'. If 'auto' is specified, MATLAB® applies a number of compatible optimizations. If you use the 'auto' option, MATLAB never generates a MEX function.
Using the 'Acceleration' options 'auto' and 'mex' can offer performance benefits, but at the expense of an increased initial run time. Subsequent calls with compatible parameters are faster. Use performance optimization when you plan to call the function multiple times using new input data.
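For example, a sketch of repeated calls that reuse a compiled MEX function, assuming a supported GPU and new input data XNew:

YPred1 = predict(net,X,'Acceleration','mex');    % first call compiles the MEX function
YPred2 = predict(net,XNew,'Acceleration','mex'); % compatible later calls run faster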
The 'mex'
option generates and executes a MEX function based on the network
and parameters used in the function call. You can have several MEX functions associated
with a single network at one time. Clearing the network variable also clears any MEX
functions associated with that network.
The 'mex' option is only available when you are using a GPU. MEX acceleration supports single GPU execution using the name-value option 'ExecutionEnvironment','gpu' only.
To use the 'mex' option, you must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning Libraries support package. Install the support package using the Add-On Explorer in MATLAB. For setup instructions, see MEX Setup (GPU Coder). GPU Coder is not required.
The 'mex'
option does not support all layers. For a list of
supported layers, see Supported Layers (GPU Coder). Only networks
with an imageInputLayer
are supported.
You cannot use MATLAB
Compiler™ to deploy your network when using the 'mex'
option.
Example: 'Acceleration','mex'
ExecutionEnvironment — Hardware resource
'auto' (default) | 'gpu' | 'cpu' | 'multi-gpu' | 'parallel'

Hardware resource, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and one of the following:
'auto'
— Use a GPU if one is available; otherwise, use the
CPU.
'gpu'
— Use the GPU. Using a GPU requires
Parallel Computing Toolbox and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an
error.
'cpu'
— Use the CPU.
'multi-gpu' — Use multiple GPUs on one machine, using a
— Use multiple GPUs on one machine, using a
local parallel pool based on your default cluster profile. If there is no
current parallel pool, the software starts a parallel pool with pool size equal
to the number of available GPUs.
'parallel'
— Use a local or remote parallel pool based on
your default cluster profile. If there is no current parallel pool, the software
starts one using the default cluster profile. If the pool has access to GPUs,
then only workers with a unique GPU perform computation. If the pool does not
have GPUs, then computation takes place on all available CPU workers
instead.
For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.
The 'gpu', 'multi-gpu', and 'parallel' options require Parallel Computing Toolbox. To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
The 'multi-gpu' and 'parallel' options do not support recurrent neural networks (RNNs) containing lstmLayer, bilstmLayer, or gruLayer objects.
Example: 'ExecutionEnvironment','cpu'
ReturnCategorical — Option to return categorical labels
false (default) | true

Option to return categorical labels, specified as true or false.
If ReturnCategorical is true, then the function returns categorical labels for classification output layers. Otherwise, the function returns the prediction scores for classification output layers.
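For example, a minimal sketch that returns labels instead of scores:

YPredLabels = predict(net,XTest,'ReturnCategorical',true); % categorical labels for classification output layers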
SequenceLength — Option to pad, truncate, or split input sequences
'longest' (default) | 'shortest' | positive integer

Option to pad, truncate, or split input sequences, specified as one of the following:
'longest' — Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the network.
'shortest' — Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.
Positive integer — For each mini-batch, pad the sequences to the nearest multiple of the specified length that is greater than the longest sequence length in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the 'MiniBatchSize' option to a lower value.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
Example: 'SequenceLength','shortest'
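For example, a sketch that splits long sequences into chunks that fit in memory (the chunk length 200 is illustrative):

YPred = predict(net,sequences,'SequenceLength',200,'MiniBatchSize',16);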
SequencePaddingDirection — Direction of padding or truncation
'right' (default) | 'left'

Direction of padding or truncation, specified as one of the following:
'right'
— Pad or truncate sequences on the right. The
sequences start at the same time step and the software truncates or adds
padding to the end of the sequences.
'left'
— Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.
Because LSTM layers process sequence data one time step at a time, when the layer OutputMode
property is 'last'
, any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the 'SequencePaddingDirection'
option to 'left'
.
For sequencetosequence networks (when the OutputMode
property is 'sequence'
for each LSTM layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the 'SequencePaddingDirection'
option to 'right'
.
To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.
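For example, a minimal sketch for a sequence-to-label network (OutputMode 'last'), where left padding keeps the final time steps free of padding:

YPred = predict(net,sequences,'SequencePaddingDirection','left');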
SequencePaddingValue — Value to pad input sequences

Value by which to pad input sequences, specified as a scalar. The option is valid only when SequenceLength is 'longest' or a positive integer. Do not pad sequences with NaN, because doing so can propagate errors throughout the network.
Example: 'SequencePaddingValue',-1
YPred — Predicted scores or responses

Predicted scores or responses, returned as a matrix, a 4-D numeric array, or a cell array of matrices. The format of YPred depends on the type of problem.
The following table describes the format for classification problems.

Task | Format
Image classification | N-by-K matrix, where N is the number of observations and K is the number of classes.
Sequence-to-label classification | N-by-K matrix, where N is the number of observations and K is the number of classes.
Feature classification | N-by-K matrix, where N is the number of observations and K is the number of classes.
Sequence-to-sequence classification | N-by-1 cell array of matrices, where N is the number of observations. The sequences are matrices with K rows, where K is the number of classes. Each sequence has the same number of time steps as the corresponding input sequence after applying the 'SequenceLength' option to each mini-batch independently.
The following table describes the format for regression problems.

Task | Format
2-D image regression | N-by-R matrix, where N is the number of images and R is the number of responses, or, for image-to-image regression, an h-by-w-by-c-by-N numeric array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images.
3-D image regression | N-by-R matrix, where N is the number of images and R is the number of responses, or, for image-to-image regression, an h-by-w-by-d-by-c-by-N numeric array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images.
Sequence-to-one regression | N-by-R matrix, where N is the number of sequences and R is the number of responses.
Sequence-to-sequence regression | N-by-1 cell array of numeric sequences, where N is the number of sequences. The sequences are matrices with R rows, where R is the number of responses. Each sequence has the same number of time steps as the corresponding input sequence after applying the 'SequenceLength' option to each mini-batch independently.
Feature regression | N-by-R matrix, where N is the number of observations and R is the number of responses.
For sequence-to-sequence regression problems with one observation, sequences can be a matrix. In this case, YPred is a matrix of responses.
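For example, a minimal sketch with a single observation, assuming a sequence-to-sequence regression network net with 12 input features:

seq = rand(12,50);        % one c-by-s sequence
YPred = predict(net,seq); % YPred is an R-by-50 matrix of responses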
If the image data contains NaNs, predict propagates them through the network. If the network has ReLU layers, these layers ignore NaNs. However, if the network does not have a ReLU layer, then predict returns NaNs as predictions.
When you train a network using the trainNetwork function, or when you use prediction or validation functions with DAGNetwork and SeriesNetwork objects, the software performs these computations using single-precision, floating-point arithmetic. Functions for training, prediction, and validation include trainNetwork, predict, classify, and activations.

The software uses single-precision arithmetic when you train networks using both CPUs and GPUs.
You can compute the predicted scores and the predicted classes from a trained network using classify.

You can also compute the activations from a network layer using activations.

For sequence-to-label and sequence-to-sequence classification networks (for example, LSTM networks), you can make predictions and update the network state using classifyAndUpdateState and predictAndUpdateState.
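For example, a sketch of stateful, step-by-step prediction with predictAndUpdateState, assuming an LSTM network net and one c-by-s sequence seq:

for t = 1:size(seq,2)
    [net,scores] = predictAndUpdateState(net,seq(:,t)); % one time step at a time
end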
[1] M. Kudo, J. Toyama, and M. Shimbo. "Multidimensional Curve Classification Using Passing-Through Regions." Pattern Recognition Letters. Vol. 20, No. 11–13, 1999, pages 1103–1111.
[2] UCI Machine Learning Repository: Japanese Vowels Dataset. https://archive.ics.uci.edu/ml/datasets/Japanese+Vowels
Usage notes and limitations:
C++ code generation supports the following syntaxes:
YPred = predict(net,X)
[YPred1,...,YPredM] = predict(___)
YPred = predict(net,sequences)
___ = predict(___,Name,Value)
The input X must not have a variable size. The size must be fixed at code generation time.
For vector sequence inputs, the number of features must be a constant during code generation. The sequence length can be variable sized.
For image sequence inputs, the height, width, and the number of channels must be a constant during code generation.
Only the 'MiniBatchSize', 'ReturnCategorical', 'SequenceLength', 'SequencePaddingDirection', and 'SequencePaddingValue' name-value pair arguments are supported for code generation. All name-value pairs must be compile-time constants.
Only the 'longest' and 'shortest' options of the 'SequenceLength' name-value pair are supported for code generation.
If 'ReturnCategorical' is set to true and you use a GCC C/C++ compiler version 8.2 or above, you might get a -Wstringop-overflow warning.
Code generation for the Intel® MKL-DNN target does not support the combination of the 'SequenceLength','longest', 'SequencePaddingDirection','left', and 'SequencePaddingValue',0 name-value arguments.
For more information about generating code for deep learning neural networks, see Workflow for Deep Learning Code Generation with MATLAB Coder (MATLAB Coder).
Usage notes and limitations:
GPU code generation supports the following syntaxes:
YPred = predict(net,X)
[YPred1,...,YPredM] = predict(___)
YPred = predict(net,sequences)
___ = predict(___,Name,Value)
The input X must not have a variable size. The size must be fixed at code generation time.
GPU code generation does not support gpuArray inputs to the predict function.
The cuDNN library supports vector and 2-D image sequences. The TensorRT library supports only vector input sequences. The ARM® Compute Library for GPU does not support recurrent networks.
For vector sequence inputs, the number of features must be a constant during code generation. The sequence length can be variable sized.
For image sequence inputs, the height, width, and the number of channels must be a constant during code generation.
Only the 'MiniBatchSize', 'ReturnCategorical', 'SequenceLength', 'SequencePaddingDirection', and 'SequencePaddingValue' name-value pair arguments are supported for code generation. All name-value pairs must be compile-time constants.
Only the 'longest' and 'shortest' options of the 'SequenceLength' name-value pair are supported for code generation.
GPU code generation for the predict function supports inputs that are defined as half-precision floating-point data types. For more information, see half (GPU Coder).
If 'ReturnCategorical' is set to true and you use a GCC C/C++ compiler version 8.2 or above, you might get a -Wstringop-overflow warning.
To run computations in parallel, set the 'ExecutionEnvironment' option to 'multi-gpu' or 'parallel'.
For details, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.
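For example, a minimal sketch that runs prediction on a parallel pool:

YPred = predict(net,X,'ExecutionEnvironment','parallel');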
When input data is a gpuArray, a cell array or table containing gpuArray data, or a datastore that returns gpuArray data, the 'ExecutionEnvironment' option must be 'auto' or 'gpu'.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
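For example, a minimal sketch with gpuArray input, assuming Parallel Computing Toolbox and a supported GPU:

XGpu = gpuArray(X); % move the input data to the GPU
YPred = predict(net,XGpu,'ExecutionEnvironment','gpu');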
activations | classify | classifyAndUpdateState | predictAndUpdateState