graph LR
Client_Applications["Client Applications"]
Transport_Layer["Transport Layer"]
Pipecat_Core_Engine["Pipecat Core Engine"]
Pipeline_Processing_Modules["Pipeline Processing Modules"]
External_AI_Service_Adapters["External AI Service Adapters"]
Context_Memory_Management["Context & Memory Management"]
Application_Runner["Application Runner"]
Client_Applications -- "sends data to" --> Transport_Layer
Transport_Layer -- "feeds frames to" --> Pipecat_Core_Engine
Pipecat_Core_Engine -- "routes frames to" --> Pipeline_Processing_Modules
Pipeline_Processing_Modules -- "sends data to" --> External_AI_Service_Adapters
External_AI_Service_Adapters -- "interacts with" --> Context_Memory_Management
Context_Memory_Management -- "provides context to" --> External_AI_Service_Adapters
External_AI_Service_Adapters -- "returns data to" --> Pipeline_Processing_Modules
Pipeline_Processing_Modules -- "returns frames to" --> Pipecat_Core_Engine
Pipecat_Core_Engine -- "feeds frames to" --> Transport_Layer
Transport_Layer -- "sends data to" --> Client_Applications
Application_Runner -- "configures and launches" --> Pipecat_Core_Engine
click Client_Applications href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/pipecat/Client_Applications.md" "Details"
click Transport_Layer href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/pipecat/Transport_Layer.md" "Details"
click Pipecat_Core_Engine href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/pipecat/Pipecat_Core_Engine.md" "Details"
click Pipeline_Processing_Modules href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/pipecat/Pipeline_Processing_Modules.md" "Details"
click External_AI_Service_Adapters href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/pipecat/External_AI_Service_Adapters.md" "Details"
click Context_Memory_Management href "https://github.com/CodeBoarding/GeneratedOnBoardings/blob/main/pipecat/Context_Memory_Management.md" "Details"
The Pipecat system is designed around a modular, pipeline-driven architecture for conversational AI. Client Applications initiate interactions, sending data to the Transport Layer, which handles real-time communication and protocol translation. The Transport Layer feeds these data frames into the Pipecat Core Engine, the central orchestrator responsible for defining and executing the AI pipeline.

Within this pipeline, data flows through various Pipeline Processing Modules for transformation, filtering, and specialized audio handling. These modules interact with External AI Service Adapters, which integrate with diverse AI providers (STT, LLM, TTS, Multimodal) to perform the core AI tasks. Conversational state and historical data are managed by the Context & Memory Management component, which supplies essential context to the adapters.

Processed data and AI responses then return through the Pipeline Processing Modules to the Pipecat Core Engine, which routes them back to the Transport Layer for delivery to the Client Applications. The Application Runner configures and launches the entire system, managing the overall application lifecycle.
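The round trip described above can be sketched as a minimal, self-contained model. This is plain Python, not the actual Pipecat API; the frame kinds and the `stt`/`llm`/`tts` stages are illustrative stand-ins for the real processors:

```python
from dataclasses import dataclass
from typing import Callable, List

# A frame is the unit of data moving through the pipeline
# (audio chunks, transcriptions, LLM responses, ...).
@dataclass
class Frame:
    kind: str      # e.g. "audio", "text", "tts-audio"
    payload: str

# A processor transforms a frame and passes it downstream.
Processor = Callable[[Frame], Frame]

def run_pipeline(frame: Frame, processors: List[Processor]) -> Frame:
    """Push one frame through every stage, in order."""
    for process in processors:
        frame = process(frame)
    return frame

# Hypothetical stages standing in for STT -> LLM -> TTS.
stt = lambda f: Frame("text", f.payload.upper())            # "transcribe"
llm = lambda f: Frame("text", f"reply to: {f.payload}")     # "generate"
tts = lambda f: Frame("tts-audio", f"[audio] {f.payload}")  # "synthesize"

out = run_pipeline(Frame("audio", "hello"), [stt, llm, tts])
print(out.payload)  # [audio] reply to: HELLO
```

In the real system the Transport Layer produces and consumes the frames at both ends of this chain, and the stages are asynchronous rather than simple functions.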
Client Applications
External user interfaces (web, mobile, custom clients) that interact with the Pipecat system, primarily through WebRTC or WebSocket connections.
Related Classes/Methods:
pipecat.transports.network.small_webrtc.SmallWebRTCTransport:746-842
pipecat.transports.network.websocket_server.WebsocketServer
Transport Layer
Manages bidirectional, real-time communication between clients and the core engine, handling protocol conversions and data frame transmission.
Related Classes/Methods:
pipecat.transports.base_input.BaseInput:56-513
pipecat.transports.base_output.BaseOutput:55-863
pipecat.transports.network.websocket_server.WebsocketServer
pipecat.transports.network.webrtc_connection.WebRtcConnection
Pipecat Core Engine
The central orchestrator defining and executing the conversational AI pipeline, routing data frames between various processing modules and services.
Related Classes/Methods:
pipecat.pipeline.runner.PipelineRunner:26-124
pipecat.pipeline.pipeline.Pipeline:87-181
pipecat.pipeline.task.PipelineTask:168-834
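A rough stdlib model of how these three classes relate. The real `Pipeline`, `PipelineTask`, and `PipelineRunner` are frame-based and far richer (queues, cancellation, bidirectional flow); this only sketches the orchestration pattern:

```python
import asyncio
from typing import Callable, List

class Pipeline:
    """A fixed sequence of stages a frame passes through."""
    def __init__(self, stages: List[Callable[[str], str]]):
        self.stages = stages

    async def process(self, frame: str) -> str:
        for stage in self.stages:
            frame = stage(frame)
            await asyncio.sleep(0)  # yield control, as real async stages would
        return frame

class PipelineTask:
    """Binds a pipeline to a stream of incoming frames."""
    def __init__(self, pipeline: Pipeline, frames: List[str]):
        self.pipeline, self.frames = pipeline, frames
        self.results: List[str] = []

    async def run(self) -> None:
        for frame in self.frames:
            self.results.append(await self.pipeline.process(frame))

class PipelineRunner:
    """Owns the event loop and drives a task to completion."""
    def run(self, task: PipelineTask) -> List[str]:
        asyncio.run(task.run())
        return task.results

pipeline = Pipeline([str.strip, str.lower])
task = PipelineTask(pipeline, ["  Hello ", " WORLD "])
print(PipelineRunner().run(task))  # ['hello', 'world']
```

The same layering appears in Pipecat itself: the pipeline defines the stages, the task binds it to live data, and the runner manages execution.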
Pipeline Processing Modules
Internal components performing data transformations, filtering, aggregation, and specialized audio processing within the pipeline.
Related Classes/Methods:
pipecat.audio.vad.vad_analyzer.VADAnalyzer:61-232
pipecat.processors.transcript_processor.TranscriptProcessor:222-325
pipecat.processors.audio.audio_buffer_processor.AudioBufferProcessor:33-331
pipecat.processors.aggregators.dtmf_aggregator.DTMFAggregator:31-164
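Aggregation is a representative example of what these modules do. The sketch below is loosely inspired by the `DTMFAggregator` listed above but is not its actual implementation: it buffers keypad digits until a terminator arrives, then emits the whole sequence as one result:

```python
from typing import List, Optional

class DigitAggregator:
    """Illustrative aggregating processor: collect digits until '#'."""
    def __init__(self, terminator: str = "#"):
        self.terminator = terminator
        self.buffer: List[str] = []

    def process(self, digit: str) -> Optional[str]:
        """Return the full sequence when complete, else None."""
        if digit == self.terminator:
            sequence, self.buffer = "".join(self.buffer), []
            return sequence
        self.buffer.append(digit)
        return None

agg = DigitAggregator()
results = [agg.process(d) for d in "1234#"]
print(results)  # [None, None, None, None, '1234']
```

Filtering and audio-buffering modules follow the same shape: stateful processors that consume many small frames and emit fewer, larger ones downstream.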
External AI Service Adapters
Pluggable modules integrating with various external AI providers (STT, LLM, TTS, Multimodal) to perform core AI tasks.
Related Classes/Methods:
pipecat.services.stt_service.STTService:30-188
pipecat.services.llm_service.LLMService:136-622
pipecat.services.tts_service.TTSService:47-442
pipecat.services.vision_service.VisionService:22-73
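The "pluggable" quality comes from the adapter pattern: one abstract interface per task, one concrete subclass per external provider. The class and method names below are illustrative, not the actual Pipecat signatures:

```python
from abc import ABC, abstractmethod

class STTAdapter(ABC):
    """Hypothetical speech-to-text interface the pipeline depends on."""
    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...

class FakeProviderSTT(STTAdapter):
    """Stand-in provider; a real adapter would call a vendor API."""
    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")  # pretend the audio 'is' its text

def handle_audio(stt: STTAdapter, audio: bytes) -> str:
    # The pipeline depends only on the interface, so providers are
    # swappable without touching pipeline code.
    return stt.transcribe(audio)

print(handle_audio(FakeProviderSTT(), b"hello world"))  # hello world
```

Swapping one STT, LLM, or TTS provider for another therefore changes only which concrete adapter is constructed, never the pipeline itself.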
Context & Memory Management
Modules responsible for maintaining conversational context and managing short/long-term memory for LLM interactions.
Related Classes/Methods:
pipecat.processors.aggregators.openai_llm_context.OpenAILLMContext:59-348
pipecat.services.mem0.memory.Mem0Memory:36-259
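A simplified model of what a context aggregator like `OpenAILLMContext` maintains (this is a sketch, not the real class): the conversation accumulates as OpenAI-style role/content messages that are replayed to the LLM on each turn.

```python
from typing import Dict, List

class LLMContext:
    """Illustrative conversational context: an ordered message history."""
    def __init__(self, system_prompt: str):
        self.messages: List[Dict[str, str]] = [
            {"role": "system", "content": system_prompt}
        ]

    def add_user(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})

ctx = LLMContext("You are a helpful voice assistant.")
ctx.add_user("What's the weather?")
ctx.add_assistant("Sunny and 72.")
print(len(ctx.messages))  # 3
```

Long-term memory components such as Mem0Memory layer persistence and retrieval on top of this short-term, per-conversation history.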
Application Runner
Configures and launches the Pipecat Core Engine and its associated components, managing the overall application lifecycle.
Related Classes/Methods: