```mermaid
graph LR
Embedding_Layer_Manager["Embedding Layer Manager"]
Model_Graph_Builder["Model Graph Builder"]
Transformer_Layer["Transformer Layer"]
ALBERT_Model_Integrator["ALBERT Model Integrator"]
Capsule_Layer["Capsule Layer"]
Attention_Layers["Attention Layers"]
Pooling_Feature_Layers["Pooling & Feature Layers"]
Custom_Optimizers["Custom Optimizers"]
Model_Graph_Builder -- "depends on" --> Embedding_Layer_Manager
Model_Graph_Builder -- "composes" --> Transformer_Layer
Model_Graph_Builder -- "composes" --> ALBERT_Model_Integrator
Model_Graph_Builder -- "composes" --> Capsule_Layer
Model_Graph_Builder -- "composes" --> Attention_Layers
Model_Graph_Builder -- "composes" --> Pooling_Feature_Layers
Model_Graph_Builder -- "configures and applies" --> Custom_Optimizers
The Model Building Blocks subsystem provides the foundational elements for constructing text classification models within the Keras framework. It encapsulates core utilities for data representation and graph definition, along with a rich set of specialized custom layers and optimizers.
**Embedding Layer Manager**: Manages the creation and configuration of embedding layers, converting raw text into numerical representations suitable for model input. It handles the initial data preparation for the neural network.
Related Classes/Methods:
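As a rough sketch of what this component is responsible for, a stock Keras embedding pipeline looks like the following; the vocabulary size, embedding dimension, and sequence length are illustrative, not the project's actual configuration:

```python
import tensorflow as tf

vocab_size, embed_dim, max_len = 20000, 300, 128   # illustrative sizes

token_ids = tf.keras.Input(shape=(max_len,), dtype="int32")
# mask_zero=True lets downstream layers ignore padded positions (token id 0)
embedded = tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True)(token_ids)
# embedded: (batch, max_len, embed_dim) float tensor, ready for the model graph
```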
**Model Graph Builder**: Serves as the central orchestrator for high-level model building, compilation, and training. It defines the overall structure of Keras models by integrating various custom layers and pre-trained model components.
Related Classes/Methods:
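A hedged sketch of the orchestration pattern, assuming the Keras functional API; `build_graph` and the BiGRU stand-in are hypothetical placeholders, not the project's actual builder:

```python
import tensorflow as tf

def build_graph(vocab_size=20000, embed_dim=300, max_len=128, num_classes=10):
    """Assemble, compile, and return a Keras text classifier."""
    inputs = tf.keras.Input(shape=(max_len,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)
    # Any of the custom layers below (Transformer, Capsule, attention,
    # K-Max Pooling, ...) could be composed here in place of the BiGRU.
    x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128))(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```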
**Transformer Layer**: Implements the core logic of Transformer encoder and decoder stacks, including multi-head attention. It provides a reusable, configurable Transformer architecture.
Related Classes/Methods:
keras_textclassification.keras_layers.transformer:1-100
keras_textclassification.keras_layers.transformer_utils.multi_head_attention:1-100
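A minimal sketch of one encoder block, assuming the stock `tf.keras.layers.MultiHeadAttention`; the project's own transformer and multi_head_attention modules may structure this differently:

```python
import tensorflow as tf

def encoder_block(x, num_heads=8, d_ff=1024, dropout=0.1):
    """One Transformer encoder block; x: (batch, seq_len, d_model)."""
    d_model = x.shape[-1]
    # Multi-head self-attention sublayer with residual connection + layer norm
    attn = tf.keras.layers.MultiHeadAttention(num_heads, d_model // num_heads)(x, x)
    x = tf.keras.layers.LayerNormalization()(x + tf.keras.layers.Dropout(dropout)(attn))
    # Position-wise feed-forward sublayer, same residual pattern
    ff = tf.keras.layers.Dense(d_ff, activation="relu")(x)
    ff = tf.keras.layers.Dense(d_model)(ff)
    return tf.keras.layers.LayerNormalization()(x + tf.keras.layers.Dropout(dropout)(ff))
```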
**ALBERT Model Integrator**: Facilitates the integration of ALBERT models into the Keras framework, including constructing the model and loading pre-trained weights.
Related Classes/Methods:
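The two ideas that distinguish ALBERT can be sketched in plain Keras; the project ships its own ALBERT code, and the sizes and the Dense stand-in block here are purely illustrative:

```python
import tensorflow as tf

vocab_size, embed_dim, hidden_dim, num_layers = 30000, 128, 768, 12  # E << H

token_ids = tf.keras.Input(shape=(None,), dtype="int32")
# Factorized embedding: a V x E table plus an E -> H projection costs far
# fewer parameters than a direct V x H embedding table
e = tf.keras.layers.Embedding(vocab_size, embed_dim)(token_ids)
h = tf.keras.layers.Dense(hidden_dim)(e)
# Cross-layer parameter sharing: the SAME layer instance is applied at every
# depth, so extra depth adds no parameters (a Dense stands in for a full block)
shared_block = tf.keras.layers.Dense(hidden_dim, activation="gelu")
for _ in range(num_layers):
    h = shared_block(h)
```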
**Capsule Layer**: Implements the Capsule layer, including its squash activation function, offering an alternative to traditional convolutional layers for hierarchical feature learning.
Related Classes/Methods:
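The squash non-linearity itself is standard across Capsule implementations; a minimal version:

```python
import tensorflow as tf

def squash(s, axis=-1, epsilon=1e-7):
    """squash(s) = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).

    Shrinks short vectors toward zero and caps long vectors near unit
    length, so a capsule's length can be read as a probability.
    """
    squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    scale = squared_norm / (1.0 + squared_norm)
    return scale * s / tf.sqrt(squared_norm + epsilon)
```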
**Attention Layers**: Provide various attention mechanisms, such as self-attention and dot-product attention, which are crucial for capturing dependencies within sequences.
Related Classes/Methods:
keras_textclassification.keras_layers.attention_self:1-100
keras_textclassification.keras_layers.attention_dot:1-100
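Both layer families reduce to scaled dot-product attention at their core; a minimal sketch (parameter names are illustrative):

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """q: (..., seq_q, d); k, v: (..., seq_k, d); mask: 1 = keep, 0 = hide."""
    scores = tf.matmul(q, k, transpose_b=True)                   # (..., seq_q, seq_k)
    scores /= tf.math.sqrt(tf.cast(tf.shape(k)[-1], scores.dtype))
    if mask is not None:
        scores += (1.0 - tf.cast(mask, scores.dtype)) * -1e9     # drown masked scores
    weights = tf.nn.softmax(scores, axis=-1)                     # rows sum to 1
    return tf.matmul(weights, v)                                 # weighted sum of values
```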
**Pooling & Feature Layers**: Offer specialized operations for feature selection (K-Max Pooling), non-linear transformations (Highway networks), and utilities for handling masks in recurrent or attention-based networks.
Related Classes/Methods:
keras_textclassification.keras_layers.k_max_pooling:1-100
keras_textclassification.keras_layers.highway:1-100
keras_textclassification.keras_layers.non_mask_layer:1-100
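Minimal sketches of the two transformations, assuming (batch, seq_len, channels) inputs; the function names are illustrative, and the project's k_max_pooling/highway modules may differ in detail:

```python
import tensorflow as tf

def k_max_pooling(x, k=3):
    """x: (batch, seq_len, channels) -> (batch, channels * k).

    Keeps the k strongest activations per channel; this sketch returns
    them sorted by value rather than by original sequence position.
    """
    x = tf.transpose(x, [0, 2, 1])                      # (batch, channels, seq_len)
    top_k = tf.math.top_k(x, k=k, sorted=True).values   # (batch, channels, k)
    return tf.reshape(top_k, [tf.shape(top_k)[0], -1])

def highway(x, units):
    """Highway: y = t * H(x) + (1 - t) * x, with sigmoid transform gate t.
    x must already have `units` features for the carry connection to work."""
    h = tf.keras.layers.Dense(units, activation="relu")(x)
    t = tf.keras.layers.Dense(units, activation="sigmoid")(x)
    return t * h + (1.0 - t) * x
```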
**Custom Optimizers**: Provide advanced optimization algorithms (e.g., Lookahead, RAdam) that can improve the training stability and performance of Keras models.
Related Classes/Methods:
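A minimal sketch of the Lookahead update rule (Zhang et al., 2019) in NumPy, separate from any Keras optimizer plumbing; names are illustrative:

```python
import numpy as np

def lookahead_sync(fast_weights, slow_weights, step, k=5, alpha=0.5):
    """Every k inner-optimizer steps, pull the slow weights toward the
    fast weights, then restart the fast weights from that point."""
    if step % k == 0:
        for slow, fast in zip(slow_weights, fast_weights):
            slow += alpha * (fast - slow)   # slow <- slow + alpha * (fast - slow)
            fast[...] = slow                # fast restarts from the slow point
```

In practice the rule wraps an inner optimizer such as Adam or RAdam; RAdam itself addresses a different issue, rectifying the variance of the adaptive learning rate during the earliest training steps.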