graph LR
TVAE_Main_Model_Interface_["TVAE (Main Model Interface)"]
Training_Orchestrator["Training Orchestrator"]
Data_Encoder["Data Encoder"]
Data_Decoder["Data Decoder"]
Loss_Calculator["Loss Calculator"]
TVAE_Main_Model_Interface_ -- "orchestrates" --> Training_Orchestrator
Training_Orchestrator -- "calls" --> Data_Encoder
Training_Orchestrator -- "directs" --> Data_Decoder
Training_Orchestrator -- "requests" --> Loss_Calculator
Loss_Calculator -- "provides loss to" --> Training_Orchestrator
Data_Encoder -- "provides output to" --> Loss_Calculator
Data_Decoder -- "provides output to" --> Loss_Calculator
The TVAE Model subsystem is a core component of the synthetic data generation library, implementing the Tabular Variational Autoencoder (TVAE) algorithm. It follows the "ML Toolkit/Library" and "Pipeline/Workflow" architectural patterns, keeping the model interface, the training logic, the networks, and the loss computation in separate, modular components.
TVAE (Main Model Interface): Serves as the primary public interface for the Tabular Variational Autoencoder. It orchestrates the overall training and sampling processes, initializing the Encoder and Decoder networks and managing the model's lifecycle.
Related Classes/Methods:
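To illustrate the facade pattern described above, here is a minimal, hypothetical sketch in plain Python. The class and method names (`TVAE`, `fit`, `sample`, the stub `Encoder`/`Decoder`) are assumptions for illustration, not the library's actual implementation; the point is only that the public interface owns the components and delegates to them.

```python
import random

class Encoder:
    """Stub: maps a data row to a (mu, log_var) pair in latent space."""
    def __call__(self, row):
        mu = sum(row) / len(row)   # toy stand-in for learned statistics
        log_var = 0.0
        return mu, log_var

class Decoder:
    """Stub: maps a latent value back to a reconstructed row."""
    def __init__(self, width):
        self.width = width
    def __call__(self, z):
        return [z] * self.width

class TVAE:
    """Facade: the public fit/sample interface that wires the parts together."""
    def __init__(self):
        self.encoder = None
        self.decoder = None

    def fit(self, data):
        # Initialize the networks from the data shape; a real implementation
        # would run the full training loop here.
        width = len(data[0])
        self.encoder = Encoder()
        self.decoder = Decoder(width)

    def sample(self, n):
        # Draw latent codes from the standard-normal prior and decode them.
        return [self.decoder(random.gauss(0.0, 1.0)) for _ in range(n)]

model = TVAE()
model.fit([[0.1, 0.2], [0.3, 0.4]])
rows = model.sample(5)
```

The caller only ever touches `fit` and `sample`; the encoder, decoder, and training details stay internal to the interface.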
Training Orchestrator: Manages the iterative training loop for the TVAE model. It coordinates the data flow through the Data Encoder and Data Decoder, and uses the Loss Calculator to compute the training loss, from which gradients are derived to optimize the model's parameters.
Related Classes/Methods:
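The loop the orchestrator runs can be sketched in miniature. This is a hedged, stdlib-only toy, not the library's code: a single scalar parameter with a hand-derived gradient stands in for the networks and backpropagation, but the shape of the loop (forward pass, loss, gradient step) is the same.

```python
# Toy "autoencoder": reconstruct x as w * x, so perfect reconstruction is w = 1.
data = [0.5, 1.0, 1.5, 2.0]
w = 0.0      # single trainable parameter
lr = 0.05    # learning rate

for epoch in range(200):
    for x in data:
        recon = w * x                 # forward pass (encode + decode collapsed)
        loss = (recon - x) ** 2       # reconstruction loss
        grad = 2 * (recon - x) * x    # analytic gradient d(loss)/dw
        w -= lr * grad                # optimizer step

print(round(w, 3))  # → 1.0
```

In the real subsystem the forward pass runs through the Data Encoder and Data Decoder, the loss comes from the Loss Calculator, and an optimizer updates all network weights; the control flow, however, is exactly this loop.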
Data Encoder: Implements the neural network that maps high-dimensional input data into a lower-dimensional latent space, learning to extract and represent the essential features of the input data.
Related Classes/Methods:
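In a VAE, the encoder outputs the parameters (mu, log-variance) of a Gaussian over the latent space, and training samples from that distribution via the reparameterization trick, z = mu + sigma * eps. A stdlib-only sketch, where the linear "network" weights are made-up placeholders:

```python
import math
import random

def encode(row, weight=0.5, bias=0.0):
    """Toy encoder: a linear map producing (mu, log_var) for a 1-d latent."""
    mu = sum(weight * x + bias for x in row) / len(row)
    log_var = -1.0   # fixed placeholder log-variance
    return mu, log_var

def reparameterize(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, 1); keeps sampling differentiable."""
    sigma = math.exp(0.5 * log_var)
    eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

mu, log_var = encode([1.0, 2.0, 3.0])
z = reparameterize(mu, log_var)
```

The reparameterization step is what lets gradients flow through the sampling operation back into the encoder's parameters during training.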
Data Decoder: Implements the neural network that reconstructs data from the latent space back into the original data space. Its goal is to generate synthetic data that closely resembles the distribution of the original data.
Related Classes/Methods:
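At sampling time the decoder is used on its own: draw z from the standard-normal prior and decode it into a synthetic row. A stdlib-only sketch with an invented linear decoder (the weights are arbitrary placeholders, not learned values):

```python
import random

def decode(z, weights=(0.2, -0.1, 0.7)):
    """Toy decoder: expands one latent value into a 3-column row."""
    return [w * z for w in weights]

random.seed(0)  # seeded for reproducibility of this sketch
synthetic = [decode(random.gauss(0.0, 1.0)) for _ in range(4)]
```

During training the same decoder instead receives latent codes produced by the encoder, so that its reconstructions can be compared against the original rows.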
Loss Calculator: Computes the combined loss function for the TVAE, which typically comprises a reconstruction term (e.g., mean squared error or binary cross-entropy) and a Kullback-Leibler (KL) divergence term. This loss guides the model's learning process.
Related Classes/Methods:
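For a diagonal Gaussian posterior N(mu, sigma^2) measured against a standard-normal prior, the KL term has the closed form -0.5 * sum(1 + log_var - mu^2 - exp(log_var)); adding a reconstruction error gives the combined loss. A stdlib-only sketch (mean squared error is used for reconstruction here; the actual library may weight or compose the terms differently):

```python
import math

def kl_divergence(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian, summed over dims."""
    return -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                      for m, lv in zip(mu, log_var))

def reconstruction_loss(original, reconstructed):
    """Mean squared error between an input row and its reconstruction."""
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

def combined_loss(original, reconstructed, mu, log_var, beta=1.0):
    return reconstruction_loss(original, reconstructed) + beta * kl_divergence(mu, log_var)

# When the posterior equals the prior (mu=0, log_var=0), the KL term vanishes.
loss = combined_loss([1.0, 2.0], [1.0, 2.0], mu=[0.0], log_var=[0.0])
print(loss)  # → 0.0
```

The KL term regularizes the latent space toward the prior (so sampling from N(0, 1) at generation time is valid), while the reconstruction term drives fidelity to the training data.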