```mermaid
graph LR
Predictor["Predictor"]
Vector_Retrieve["Vector Retrieve"]
CSV_Predictor["CSV Predictor"]
Hive_Parquet_Predictor["Hive Parquet Predictor"]
Hive_Predictor["Hive Predictor"]
Predictor -- "delegates input/output handling to" --> CSV_Predictor
Predictor -- "delegates input/output handling to" --> Hive_Parquet_Predictor
Predictor -- "delegates input/output handling to" --> Hive_Predictor
Predictor -- "directs prediction results to" --> Vector_Retrieve
```
The Deployment & Serving subsystem is encapsulated within the `easy_rec.python.inference` package. This package is responsible for loading trained models, performing real-time or batch inference, and writing out predictions, forming the serving end of the project's end-to-end recommendation framework.
Predictor: Serves as the core inference engine. It orchestrates the entire prediction workflow: loading the trained model, constructing the TensorFlow graph for inference, managing input/output tensors, executing the model, and handling prediction results. It is the primary interface through which users perform predictions, and it delegates format-specific input/output handling to the specialized predictors below.
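The delegation shown in the diagram can be sketched as a core engine that owns model execution while handing input/output to a pluggable format handler. This is a minimal, hypothetical illustration of the pattern; the class and method names below are invented for the sketch and are not EasyRec's actual API.

```python
from abc import ABC, abstractmethod


class FormatHandler(ABC):
    """Format-specific I/O delegate (hypothetical; stands in for the
    CSV/Hive/Hive-Parquet predictors)."""

    @abstractmethod
    def read_rows(self, path):
        ...

    @abstractmethod
    def write_rows(self, path, rows):
        ...


class CsvHandler(FormatHandler):
    def read_rows(self, path):
        # A real handler would parse a CSV file; canned rows keep the sketch runnable.
        return [{"user_id": 1}, {"user_id": 2}]

    def write_rows(self, path, rows):
        # Stand-in for writing an output file.
        self.written = rows


class Predictor:
    """Core engine: runs the model and delegates all I/O to the handler."""

    def __init__(self, io_handler):
        self.io = io_handler

    def _run_model(self, row):
        # Placeholder for real graph execution (feeding input tensors,
        # running the session, collecting output tensors).
        return {"score": 0.5, **row}

    def predict(self, input_path, output_path):
        rows = self.io.read_rows(input_path)
        results = [self._run_model(r) for r in rows]
        self.io.write_rows(output_path, results)
        return results
```

Because the engine only talks to the `FormatHandler` interface, adding a new data source means adding a handler, not touching the prediction loop.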
Vector Retrieve: Specializes in post-processing model outputs into numerical vector representations. This is crucial for recommendation systems that rely on vector similarity search for candidate generation and retrieval.
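To make the retrieval use case concrete, here is a minimal sketch of cosine-similarity top-k lookup over item embeddings. It is a plain-Python illustration of what a downstream vector index does with the exported vectors, not EasyRec's retrieval code (production systems use approximate-nearest-neighbor engines such as Faiss).

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def top_k(query, catalog, k=2):
    """Return the k item ids whose embeddings are most similar to the query.

    catalog: mapping of item id -> embedding vector.
    """
    ranked = sorted(catalog.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [item_id for item_id, _ in ranked[:k]]
```

A brute-force scan like this is fine for small catalogs; at production scale the same vectors are loaded into an ANN index instead.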
CSV Predictor: Handles predictions when the input data is provided in CSV format. It manages CSV-specific reading and parsing, as well as writing prediction results back out.
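The CSV read/score/write round trip can be sketched with the standard `csv` module. The function below is a hypothetical illustration of the flow (parse rows, score each one, append a score column), assuming an arbitrary `score_fn` in place of the real model call.

```python
import csv
import io


def predict_csv(input_text, score_fn):
    """Read feature rows from CSV text, score each row, and return
    output CSV text with an added 'score' column."""
    reader = csv.DictReader(io.StringIO(input_text))
    out = io.StringIO()
    fieldnames = list(reader.fieldnames) + ["score"]
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for row in reader:
        row["score"] = score_fn(row)  # stand-in for running the model
        writer.writerow(row)
    return out.getvalue()
```

In the real component the same structure applies, with file paths instead of in-memory text and the TensorFlow model instead of `score_fn`.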
Hive Parquet Predictor: Manages the prediction workflow for input and output data stored in Hive tables using the Parquet format, a columnar layout common in big-data ecosystems and well suited to large-scale scans.
Hive Predictor: Provides the general prediction logic for Hive tables, handling Hive inputs and outputs beyond just Parquet.
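Both Hive predictors must process tables that are far too large to score row by row or hold in memory, so rows are typically streamed through the model in fixed-size batches. A minimal, hypothetical batching helper illustrating that pattern (not EasyRec's actual code):

```python
def iter_batches(rows, batch_size):
    """Yield lists of up to batch_size rows from any iterable;
    the final batch may be smaller."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

Each yielded batch would be fed to the model as one inference call, which amortizes graph-execution overhead across many rows.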