ChatCompletion implements Pipeline
Implements an OpenAI ChatCompletion pipeline using the GPT-3.5-Turbo model as its default.
This pipeline uses an underlying driver (default: GuzzleDriver) to communicate with the OpenAI API. It expects a SUAprompt instance to set the conversation messages, including system instructions, user inputs, and assistant responses. The output methods provide access to the response as text, JSON, or an array, and streaming methods are also supported.
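A minimal usage sketch (the namespaces and the SUAprompt helper names below are assumptions; check the package and SUAprompt references for the exact imports and methods):

use LaravelNeuro\Pipelines\OpenAI\ChatCompletion;  // assumed namespace; adjust to your install
use LaravelNeuro\Prompts\SUAprompt;                // assumed namespace; adjust to your install

// Build the conversation. pushRole()/pushUser() are assumed helper names for
// adding role and user elements; consult the SUAprompt reference.
$prompt = new SUAprompt();
$prompt->pushRole('You are a concise assistant.');
$prompt->pushUser('Summarize the benefits of queued jobs in Laravel.');

$pipeline = new ChatCompletion();   // GuzzleDriver and GPT-3.5-Turbo by default
$pipeline->setPrompt($prompt);

echo $pipeline->output();           // or ->json(), ->array(), ->stream(), ...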
Table of Contents
Interfaces
- Pipeline
- Defines the methods an AI model pipeline must implement to remain compatible with LaravelNeuro's Agent Networking features.
Properties
- $accessToken : mixed
- The OpenAI access token retrieved from configuration.
- $driver : GuzzleDriver
- The driver instance used for HTTP communication.
- $model : mixed
- The model identifier used for the API request.
- $prompt : mixed
- The prompt data, structured as an array of messages.
Methods
- __construct() : mixed
- ChatCompletion constructor.
- array() : array<string|int, mixed>
- Retrieves the API response as an associative array.
- driver() : Driver
- Accesses the injected Driver instance.
- driverClass() : string
- Retrieves the class name of the default associated driver.
- getDriver() : Driver
- Retrieves the current driver.
- getModel() : mixed
- Retrieves the current model identifier.
- getPrompt() : mixed
- Retrieves the current prompt.
- json() : string
- Retrieves the API response as a JSON-formatted string.
- output() : string
- Executes the API request and returns the generated text output.
- promptClass() : string
- Retrieves the class name of the default associated prompt.
- setModel() : self
- Sets the model for the pipeline.
- setPrompt() : self
- Sets the prompt for the pipeline.
- stream() : Generator
- Executes a streaming API request and yields output chunks.
- streamArray() : Generator
- Executes a streaming API request and yields the output as decoded arrays.
- streamJson() : Generator
- Executes a streaming API request and yields JSON-encoded output chunks.
- streamText() : Generator
- Executes a streaming API request and yields text output chunks.
- text() : string
- Retrieves the text output from the API response.
Properties
$accessToken
The OpenAI access token retrieved from configuration.
protected
mixed
$accessToken
$driver
The driver instance used for HTTP communication.
protected
GuzzleDriver
$driver
$model
The model identifier used for the API request.
protected
mixed
$model
$prompt
The prompt data, structured as an array of messages.
protected
mixed
$prompt
Methods
__construct()
ChatCompletion constructor.
public
__construct([Driver $driver = new GuzzleDriver() ]) : mixed
Retrieves configuration values for the GPT-3.5-Turbo model and API endpoint, initializes the driver (defaulting to GuzzleDriver), sets required HTTP headers, and validates that all required configuration values are present.
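For illustration, a hedged sketch of both construction styles (the injected driver can be any class implementing the Driver contract; the GuzzleDriver shown is simply the package default):

// Default construction: GuzzleDriver, model and endpoint read from configuration.
$pipeline = new ChatCompletion();

// Or inject a custom Driver implementation (namespace assumed; adjust as needed).
$pipeline = new ChatCompletion(new GuzzleDriver());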
Parameters
- $driver : Driver = new GuzzleDriver()
  An instance implementing the Driver contract.
array()
Retrieves the API response as an associative array.
public
array() : array<string|int, mixed>
Return values
array<string|int, mixed> —The decoded API response.
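As a sketch, assuming the decoded array mirrors OpenAI's standard chat completion response shape, individual fields can be read directly:

$response = $pipeline->array();

// Standard OpenAI chat completion fields (shape assumed; inspect your own response).
$content = $response['choices'][0]['message']['content'] ?? null;
$usage   = $response['usage'] ?? [];   // token counts, when provided by the API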
driver()
Accesses the injected Driver instance.
public
driver() : Driver
Return values
Driver —The Driver instance stored on this pipeline.
driverClass()
Retrieves the class name of the default associated driver.
public
driverClass() : string
Return values
string
getDriver()
Retrieves the current driver.
public
getDriver() : Driver
Return values
Driver —The driver instance.
getModel()
Retrieves the current model identifier.
public
getModel() : mixed
Return values
mixed —The model identifier.
getPrompt()
Retrieves the current prompt.
public
getPrompt() : mixed
Return values
mixed —The prompt data.
json()
Retrieves the API response as a JSON-formatted string.
public
json() : string
Return values
string —The JSON-encoded API response.
output()
Executes the API request and returns the generated text output.
public
output() : string
This method is an alias for text().
Return values
string —The generated text output.
promptClass()
Retrieves the class name of the default associated prompt.
public
promptClass() : string
Return values
string
setModel()
Sets the model for the pipeline.
public
setModel(mixed $model) : self
Updates both the pipeline and the underlying driver's model.
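For example (the model name shown is only a placeholder; any OpenAI chat model identifier can be passed):

$pipeline->setModel('gpt-4o-mini')   // placeholder model id
         ->setPrompt($prompt);       // setModel() returns $this, so calls can be chained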
Parameters
- $model : mixed
  The model identifier.
Return values
self
setPrompt()
Sets the prompt for the pipeline.
public
setPrompt(SUAprompt $prompt) : self
Expects a SUAprompt instance. Iterates through the prompt elements to build a structured messages array where:
- "role" elements are transformed into system messages.
- "agent" elements become assistant messages.
- "user" elements become user messages. The resulting messages array is then passed to the driver using the "messages" key.
Parameters
- $prompt : SUAprompt
  A SUAprompt instance.
Return values
self
stream()
Executes a streaming API request and yields output chunks.
public
stream() : Generator
Modifies the request to enable streaming mode and yields the raw stream output.
Return values
Generator —Yields the streaming output.
streamArray()
Executes a streaming API request and yields the output as decoded arrays.
public
streamArray() : Generator
Return values
Generator —Yields decoded output as an array.
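A consumption sketch, assuming each decoded chunk mirrors OpenAI's raw streaming format, where incremental text arrives under a delta entry:

foreach ($pipeline->streamArray() as $chunk) {
    // Chunk shape assumed to follow OpenAI's streaming format.
    echo $chunk['choices'][0]['delta']['content'] ?? '';
}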
streamJson()
Executes a streaming API request and yields JSON-encoded output chunks.
public
streamJson() : Generator
Return values
Generator —Yields JSON-formatted output chunks.
streamText()
Executes a streaming API request and yields text output chunks.
public
streamText() : Generator
Iterates over the streaming output and yields the text content from delta messages.
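A minimal consumption sketch, suitable for echoing the response as it arrives:

foreach ($pipeline->streamText() as $chunk) {
    echo $chunk;   // each yielded value is a piece of the generated text
}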
Return values
Generator —Yields text output chunks.
text()
Retrieves the text output from the API response.
public
text() : string
Extracts and returns the content from the first choice in the response.
Return values
string —The text content from the API response.