Class AlphaDropout
- java.lang.Object
- org.deeplearning4j.nn.conf.dropout.AlphaDropout

All Implemented Interfaces:
Serializable, Cloneable, IDropout

public class AlphaDropout extends Object implements IDropout

- See Also:
- Serialized Form
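In practice an AlphaDropout instance is passed to a layer that accepts an IDropout. A minimal configuration sketch, assuming the standard DL4J builder API; the layer sizes and the 0.95 retain probability are illustrative, not taken from this page:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.dropout.AlphaDropout;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

// Alpha dropout is intended for SELU networks: unlike standard dropout it is
// designed to preserve the self-normalizing property (approximately zero
// mean and unit variance of the activations).
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new DenseLayer.Builder()
                .nIn(784).nOut(256)
                .activation(Activation.SELU)
                .dropOut(new AlphaDropout(0.95))   // probability of retaining an activation
                .build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(256).nOut(10)
                .activation(Activation.SOFTMAX)
                .build())
        .build();
```

The ISchedule constructor can be used instead when the retain probability should change over iterations or epochs.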
Field Summary
Fields:
- static double DEFAULT_ALPHA
- static double DEFAULT_LAMBDA
-
Constructor Summary
Constructors:
- AlphaDropout(double activationRetainProbability)
- AlphaDropout(@NonNull ISchedule activationRetainProbabilitySchedule)
- protected AlphaDropout(double activationRetainProbability, ISchedule activationRetainProbabilitySchedule, double alpha, double lambda)
-
Method Summary
All Methods / Instance Methods / Concrete Methods:
- double a(double p)
- INDArray applyDropout(INDArray inputActivations, INDArray output, int iteration, int epoch, LayerWorkspaceMgr workspaceMgr)
- double b(double p)
- INDArray backprop(INDArray gradAtOutput, INDArray gradAtInput, int iteration, int epoch): Perform backprop.
- void clear(): Clear the internal state (for example, dropout mask) if any is present.
- AlphaDropout clone()
-
-
-
Field Detail
-
DEFAULT_ALPHA
public static final double DEFAULT_ALPHA
- See Also:
- Constant Field Values
-
DEFAULT_LAMBDA
public static final double DEFAULT_LAMBDA
- See Also:
- Constant Field Values
-
-
Constructor Detail
-
AlphaDropout
public AlphaDropout(double activationRetainProbability)
- Parameters:
activationRetainProbability - Probability of retaining an activation. See the AlphaDropout javadoc.
-
AlphaDropout
public AlphaDropout(@NonNull ISchedule activationRetainProbabilitySchedule)
- Parameters:
activationRetainProbabilitySchedule - Schedule for the probability of retaining an activation. See the AlphaDropout javadoc.
-
AlphaDropout
protected AlphaDropout(double activationRetainProbability, ISchedule activationRetainProbabilitySchedule, double alpha, double lambda)
-
-
Method Detail
-
applyDropout
public INDArray applyDropout(INDArray inputActivations, INDArray output, int iteration, int epoch, LayerWorkspaceMgr workspaceMgr)
- Specified by:
applyDropout in interface IDropout
- Parameters:
inputActivations - Input activations array
output - The result array (same as inputActivations for in-place ops) for the post-dropout activations
iteration - Current iteration number
epoch - Current epoch number
workspaceMgr - Workspace manager, if any storage is required (use ArrayType.INPUT)
- Returns:
- The output array after applying dropout
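The transformation applyDropout implements follows the alpha-dropout formulation from the self-normalizing networks literature: dropped activations are set to alpha' = -lambda * alpha (the negative saturation value of SELU), and an affine transform a*x + b restores zero mean and unit variance. A self-contained sketch of that math on plain arrays; the constants and the a(p)/b(p) formulas below come from the paper, not from reading this class, so treat the exact values as an assumption:

```java
import java.util.Random;

public class AlphaDropoutSketch {
    // SELU constants (assumed values, from the self-normalizing networks paper)
    static final double ALPHA  = 1.6732632423543772;
    static final double LAMBDA = 1.0507009873554804;
    // Value a dropped unit is set to: the negative saturation value of SELU
    static final double ALPHA_PRIME = -LAMBDA * ALPHA;

    // Affine scale that restores unit variance for retain probability p
    static double a(double p) {
        return 1.0 / Math.sqrt(p + ALPHA_PRIME * ALPHA_PRIME * p * (1.0 - p));
    }

    // Affine shift that restores zero mean for retain probability p
    static double b(double p) {
        return -a(p) * (1.0 - p) * ALPHA_PRIME;
    }

    // Forward pass: keep[i] is the Bernoulli(p) "retain this activation" mask
    static double[] applyDropout(double[] x, boolean[] keep, double p) {
        double a = a(p), b = b(p);
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            double kept = keep[i] ? x[i] : ALPHA_PRIME;  // dropped -> alpha'
            out[i] = a * kept + b;
        }
        return out;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double p = 0.95;
        double[] x = new double[8];
        boolean[] keep = new boolean[8];
        for (int i = 0; i < x.length; i++) {
            x[i] = rng.nextGaussian();
            keep[i] = rng.nextDouble() < p;
        }
        System.out.println(java.util.Arrays.toString(applyDropout(x, keep, p)));
    }
}
```

Note that with p = 1.0 the transform degenerates to the identity: a(1) = 1 and b(1) = 0, so no activation is altered.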
-
backprop
public INDArray backprop(INDArray gradAtOutput, INDArray gradAtInput, int iteration, int epoch)
Description copied from interface: IDropout
Perform backprop. This should also clear the internal state (dropout mask) if any is present.
- Specified by:
backprop in interface IDropout
- Parameters:
gradAtOutput - Gradients at the output of the dropout op, i.e., dL/dOut
gradAtInput - Gradients at the input of the dropout op, i.e., dL/dIn. Use the same array as gradAtOutput to apply the backprop gradient in-place
iteration - Current iteration
epoch - Current epoch
- Returns:
- Same array as gradAtInput, i.e., the gradient after backpropagating through the dropout op (dL/dIn)
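Because the forward pass is an affine transform of a masked input, out = a * (x * m + alpha' * (1 - m)) + b with mask m in {0, 1}, the local derivative dOut/dIn is simply a where the unit was retained and 0 where it was dropped. A hedged sketch of the corresponding gradient step on plain arrays (the class itself operates on INDArrays; the helper name is hypothetical):

```java
public class AlphaDropoutBackpropSketch {
    // Backprop through alpha dropout. aOfP must be the same a(p) value used
    // in the forward pass, and keep the same Bernoulli mask.
    static double[] backprop(double[] gradAtOutput, boolean[] keep, double aOfP) {
        double[] gradAtInput = new double[gradAtOutput.length];
        for (int i = 0; i < gradAtOutput.length; i++) {
            // Chain rule: dL/dIn = dL/dOut * dOut/dIn, and dOut/dIn = a * m
            gradAtInput[i] = keep[i] ? gradAtOutput[i] * aOfP : 0.0;
        }
        return gradAtInput;
    }

    public static void main(String[] args) {
        double[] g = backprop(new double[]{2.0, 3.0}, new boolean[]{true, false}, 1.5);
        System.out.println(g[0] + " " + g[1]);  // prints "3.0 0.0"
    }
}
```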
-
clear
public void clear()
Description copied from interface: IDropout
Clear the internal state (for example, the dropout mask) if any is present.
-
clone
public AlphaDropout clone()
-
a
public double a(double p)
-
b
public double b(double p)
-
-