```mermaid
graph LR
Trainer["Trainer"]
Pipeline["Pipeline"]
Optimizers["Optimizers"]
Schedulers["Schedulers"]
Writer["Writer"]
Trainer -- "calls" --> Pipeline
Trainer -- "interacts with" --> Optimizers
Trainer -- "manages" --> Schedulers
Trainer -- "utilizes" --> Writer
Pipeline -- "provides parameters/gradients to" --> Optimizers
```
This subsystem is the core of the nerfstudio project's machine learning pipeline, responsible for orchestrating the entire training and evaluation workflow. It connects data handling with the neural rendering engine, manages the optimization process, and handles persistent state management through checkpointing and logging.
### Trainer

The central orchestrator of the training and evaluation loop. It controls iteration, handles checkpointing, and updates the viewer state, driving the training process by initiating steps within the Pipeline.
Related Classes/Methods:
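To make the orchestration concrete, here is a minimal, hypothetical sketch of such a loop. The class and method names (`ToyTrainer`, `train_step`, `log_scalar`, `save_checkpoint`) are illustrative stand-ins, not nerfstudio's actual API:

```python
# Toy stand-ins for the collaborators the Trainer coordinates.
class StubPipeline:
    def train_step(self, step: int) -> float:
        # Pretend loss that shrinks as training progresses.
        return 1.0 / (step + 1)

class StubWriter:
    def __init__(self):
        self.scalars = []      # (name, value, step) records
        self.checkpoints = []  # steps at which a checkpoint was saved

    def log_scalar(self, name, value, step):
        self.scalars.append((name, value, step))

    def save_checkpoint(self, step):
        self.checkpoints.append(step)

class ToyTrainer:
    """Drives the loop: pipeline step -> metric logging -> periodic checkpoint."""

    def __init__(self, pipeline, writer, max_steps=100, ckpt_every=25):
        self.pipeline = pipeline
        self.writer = writer
        self.max_steps = max_steps
        self.ckpt_every = ckpt_every

    def train(self):
        for step in range(self.max_steps):
            loss = self.pipeline.train_step(step)       # delegate the step
            self.writer.log_scalar("loss", loss, step)  # record metrics
            if (step + 1) % self.ckpt_every == 0:       # persist state
                self.writer.save_checkpoint(step)

trainer = ToyTrainer(StubPipeline(), StubWriter())
trainer.train()
```

The point is the division of labor: the Trainer owns iteration and persistence policy, while the actual forward/backward work lives in the Pipeline.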
### Pipeline

Encapsulates the logic for a single forward and backward pass (a training step) or a single forward pass (an evaluation step). It computes losses and metrics from the model's output and provides gradients for optimization.
Related Classes/Methods:
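The train/eval split above can be sketched with a 1-D quadratic standing in for the neural rendering model; the function names here are illustrative, not nerfstudio's API:

```python
def train_step(w: float, target: float = 3.0):
    """One forward/backward pass for loss = (w - target)**2."""
    loss = (w - target) ** 2   # forward: compute the scalar loss
    grad = 2.0 * (w - target)  # backward: d(loss)/dw, handed to the optimizer
    return loss, grad

def eval_step(w: float, target: float = 3.0) -> float:
    """Forward pass only: report the metric, no gradients."""
    return (w - target) ** 2

loss, grad = train_step(0.0)  # loss = 9.0, grad = -6.0 at w = 0
```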
### Optimizers

Manages and applies the chosen optimization algorithms (e.g., Adam, SGD) to update the parameters of the neural rendering Model using the gradients computed by the Pipeline.
Related Classes/Methods:
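A plain-Python sketch of the per-parameter-group idea: one optimizer instance per named group, each possibly configured differently. This is a hypothetical stand-in for wrapping torch optimizers, not nerfstudio's actual classes:

```python
class ToySGD:
    """Plain gradient descent: move each parameter against its gradient."""

    def __init__(self, lr: float):
        self.lr = lr

    def step(self, param: float, grad: float) -> float:
        return param - self.lr * grad

class ToyOptimizers:
    """Applies a (possibly different) optimizer to each parameter group."""

    def __init__(self, configs: dict):
        # configs: {group_name: learning_rate}; group names are made up here.
        self.optimizers = {name: ToySGD(lr) for name, lr in configs.items()}

    def step_all(self, params: dict, grads: dict) -> dict:
        return {
            name: self.optimizers[name].step(params[name], grads[name])
            for name in params
        }

opts = ToyOptimizers({"fields": 0.01, "camera_opt": 0.001})
params = {"fields": 1.0, "camera_opt": 0.5}
grads = {"fields": 2.0, "camera_opt": 4.0}
new_params = opts.step_all(params, grads)
# fields: 1.0 - 0.01 * 2.0 = 0.98; camera_opt: 0.5 - 0.001 * 4.0 = 0.496
```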
### Schedulers

Adjusts the learning rates of the Optimizers over time according to a predefined schedule (e.g., exponential decay, cosine annealing), helping the training process converge.
Related Classes/Methods:
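As one example of such a schedule, an exponential decay interpolates the learning rate log-linearly from an initial to a final value over a fixed number of steps. This is a hypothetical sketch, not nerfstudio's exact scheduler API:

```python
class ToyExponentialDecay:
    """Learning rate decays geometrically from lr_init to lr_final."""

    def __init__(self, lr_init: float, lr_final: float, max_steps: int):
        self.lr_init = lr_init
        self.lr_final = lr_final
        self.max_steps = max_steps

    def get_lr(self, step: int) -> float:
        # Fraction of training completed, clamped to [0, 1].
        t = min(step, self.max_steps) / self.max_steps
        # Geometric interpolation: lr_init * (lr_final / lr_init) ** t
        return self.lr_init * (self.lr_final / self.lr_init) ** t

sched = ToyExponentialDecay(lr_init=1e-2, lr_final=1e-4, max_steps=1000)
# get_lr(0) == 1e-2, get_lr(1000) == 1e-4, and halfway (step 500)
# the rate sits at the geometric mean, 1e-3.
```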
### Writer

Provides logging of scalar metrics and time-based events, and manages event writers (e.g., TensorBoard, Weights & Biases). It also saves and loads model checkpoints so that training progress can be resumed.
Related Classes/Methods:
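The two responsibilities, buffering scalar events and round-tripping checkpoints, can be sketched as follows. `ToyWriter` and its methods are hypothetical stand-ins (using JSON files for simplicity), not nerfstudio's writer API:

```python
import json
import tempfile
from pathlib import Path

class ToyWriter:
    """Buffers scalar events and saves/loads checkpoints to disk."""

    def __init__(self, ckpt_dir):
        self.ckpt_dir = Path(ckpt_dir)
        self.events = []  # buffered {"name", "value", "step"} records

    def put_scalar(self, name, value, step):
        self.events.append({"name": name, "value": value, "step": step})

    def save_checkpoint(self, state: dict, step: int) -> Path:
        # Zero-padded step in the filename keeps checkpoints sortable.
        path = self.ckpt_dir / f"step-{step:09d}.json"
        path.write_text(json.dumps({"step": step, "state": state}))
        return path

    def load_checkpoint(self, path) -> dict:
        return json.loads(Path(path).read_text())

with tempfile.TemporaryDirectory() as d:
    writer = ToyWriter(d)
    writer.put_scalar("loss", 0.5, step=10)
    ckpt = writer.save_checkpoint({"w": 1.25}, step=10)
    restored = writer.load_checkpoint(ckpt)
    # restored carries both the step and the state, so a Trainer can
    # resume iteration exactly where it left off.
```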