```mermaid
graph LR
    Autograd_Engine["Autograd Engine"]
    Neural_Network_Modules["Neural Network Modules"]
    Neural_Network_Layers["Neural Network Layers"]
    Multi_Layer_Perceptron["Multi-Layer Perceptron"]
    Neural_Network_Layers -- "Uses" --> Autograd_Engine
    Neural_Network_Layers -- "Inherits from" --> Neural_Network_Modules
    Multi_Layer_Perceptron -- "Contains" --> Neural_Network_Layers
    Multi_Layer_Perceptron -- "Inherits from" --> Neural_Network_Modules
    click Autograd_Engine href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/micrograd/Autograd Engine.md" "Details"
    click Neural_Network_Modules href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/micrograd/Neural Network Modules.md" "Details"
    click Neural_Network_Layers href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/micrograd/Neural Network Layers.md" "Details"
    click Multi_Layer_Perceptron href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/micrograd/Multi-Layer Perceptron.md" "Details"
```
## Component Details

The micrograd library implements automatic differentiation over scalar values together with simple neural network building blocks. At its core is the Autograd Engine, which constructs expression graphs and computes gradients automatically via backpropagation. The engine underpins the neural network components: Neurons, Layers, and MLPs. The library provides a minimal foundation for understanding the basic principles of deep learning without the complexity of larger frameworks.

### Autograd Engine

The Autograd Engine is responsible for tracking scalar values and their gradients, enabling automatic differentiation. It overloads arithmetic operations to construct a computational graph, allowing for backpropagation to compute gradients. This component forms the foundation for building and training neural networks.

Related Classes/Methods:
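A minimal sketch of how such an engine can work, written from scratch for illustration (the names `Value`, `_backward`, and `_prev` mirror micrograd's conventions but this is not the library's exact code): overloaded operators record their inputs, and `backward()` walks the recorded graph in reverse topological order applying the chain rule.

```python
class Value:
    """A scalar that records the operations producing it, so gradients
    can later be propagated back through the expression graph."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # how to push grad to children
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))

        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))

        def _backward():
            # d(out)/d(self) = other.data, d(out)/d(other) = self.data
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule
        # from the output back toward the leaves.
        topo, visited = [], set()

        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)

        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a = Value(2.0)
b = Value(-3.0)
c = a * b + a        # c = a*b + a
c.backward()
print(a.grad)        # dc/da = b + 1 = -2.0
print(b.grad)        # dc/db = a     =  2.0
```

Note that gradients are accumulated with `+=` rather than assigned, so a value used in several places (like `a` above) correctly sums the contributions from each use.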

### Neural Network Modules

The Neural Network Modules component provides a base class for all neural network modules, offering a standardized interface. It includes a method for zeroing gradients, which is essential for training neural networks. Neuron, Layer, and MLP components inherit from this base class.

Related Classes/Methods:
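The base-class idea can be sketched as follows; `Module`, `zero_grad`, and `parameters` match micrograd's naming, while the `Dummy` subclass and its `Param` holder are invented here purely to demonstrate the interface:

```python
class Module:
    """Shared interface for all network components."""

    def zero_grad(self):
        # Reset accumulated gradients before the next backward pass;
        # without this, gradients from previous steps would pile up.
        for p in self.parameters():
            p.grad = 0.0

    def parameters(self):
        # Subclasses override this to return their trainable values.
        return []

# Hypothetical subclass, only to show the contract in action.
class Param:
    def __init__(self):
        self.grad = 0.5   # pretend a backward pass left this behind

class Dummy(Module):
    def __init__(self):
        self.p = Param()

    def parameters(self):
        return [self.p]

d = Dummy()
d.zero_grad()
print(d.p.grad)   # 0.0
```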

### Neural Network Layers

The Neural Network Layers component defines the structure and behavior of individual neurons and layers within a neural network. The Neuron class represents a single neuron with weights and a bias, using the Autograd Engine for calculations. The Layer class represents a collection of neurons, forming a layer in the network.

Related Classes/Methods:
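A sketch of the two classes, simplified to plain floats for brevity (micrograd's versions operate on `Value` objects so that gradients flow through the forward pass, and inherit from the `Module` base class):

```python
import math
import random

class Neuron:
    """Computes tanh(w . x + b) for a single neuron."""

    def __init__(self, nin):
        self.w = [random.uniform(-1, 1) for _ in range(nin)]
        self.b = 0.0

    def __call__(self, x):
        act = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return math.tanh(act)

class Layer:
    """A list of neurons, each applied to the same input vector."""

    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]

    def __call__(self, x):
        out = [n(x) for n in self.neurons]
        # Unwrap single-output layers for convenience.
        return out[0] if len(out) == 1 else out

layer = Layer(3, 2)            # 3 inputs, 2 neurons
ys = layer([1.0, -2.0, 0.5])   # two tanh-squashed activations
```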

### Multi-Layer Perceptron

The Multi-Layer Perceptron (MLP) component represents a complete neural network, composed of multiple layers. It inherits from the Neural Network Modules base class and utilizes the Layer component to construct the network architecture. This component provides a high-level interface for creating and using neural networks.

Related Classes/Methods:
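The composition can be sketched like this, again with plain floats rather than gradient-tracking `Value` objects; the `MLP(nin, nouts)` signature follows micrograd's convention of taking the input size plus a list of layer sizes:

```python
import math
import random

class Neuron:
    def __init__(self, nin):
        self.w = [random.uniform(-1, 1) for _ in range(nin)]
        self.b = 0.0

    def __call__(self, x):
        return math.tanh(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

class Layer:
    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]

    def __call__(self, x):
        out = [n(x) for n in self.neurons]
        return out[0] if len(out) == 1 else out

class MLP:
    """Stacks layers so that each layer's output feeds the next."""

    def __init__(self, nin, nouts):
        sizes = [nin] + nouts
        self.layers = [Layer(sizes[i], sizes[i + 1])
                       for i in range(len(nouts))]

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

net = MLP(3, [4, 4, 1])       # 3 inputs -> two hidden layers of 4 -> 1 output
y = net([2.0, 3.0, -1.0])     # a single tanh-squashed scalar
```

Because the final layer has one neuron, `Layer.__call__` unwraps the list and the network returns a bare scalar, which is convenient when fitting a single regression target.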