# LlmInterimResponseConfig Class

## Definition
> **Important**
> Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Configuration for LLM-based interim response generation. Uses LLM to generate context-aware interim responses when any trigger condition is met.
```csharp
public class LlmInterimResponseConfig : Azure.AI.VoiceLive.InterimResponseConfigBase, System.ClientModel.Primitives.IJsonModel<Azure.AI.VoiceLive.LlmInterimResponseConfig>, System.ClientModel.Primitives.IPersistableModel<Azure.AI.VoiceLive.LlmInterimResponseConfig>
```

```fsharp
type LlmInterimResponseConfig = class
    inherit InterimResponseConfigBase
    interface IJsonModel<LlmInterimResponseConfig>
    interface IPersistableModel<LlmInterimResponseConfig>
```

```vb
Public Class LlmInterimResponseConfig
Inherits InterimResponseConfigBase
Implements IJsonModel(Of LlmInterimResponseConfig), IPersistableModel(Of LlmInterimResponseConfig)
```
- Inheritance: Object → InterimResponseConfigBase → LlmInterimResponseConfig
- Implements: IJsonModel&lt;LlmInterimResponseConfig&gt;, IPersistableModel&lt;LlmInterimResponseConfig&gt;
## Constructors

| Name | Description |
|---|---|
| LlmInterimResponseConfig() | Initializes a new instance of LlmInterimResponseConfig. |
## Properties

| Name | Description |
|---|---|
| Instructions | Custom instructions for generating interim responses. If not provided, a default prompt is used. |
| LatencyThresholdMs | Latency threshold in milliseconds before triggering an interim response. Default is 2000 ms. (Inherited from InterimResponseConfigBase) |
| MaxCompletionTokens | Maximum number of tokens to generate for the interim response. |
| Model | The model to use for LLM-based interim response generation. Default is gpt-4.1-mini. |
| Triggers | List of triggers that can fire the interim response. Any trigger can activate it (OR logic). Supported: 'latency', 'tool'. (Inherited from InterimResponseConfigBase) |
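The properties above can be sketched together in a minimal configuration. This is an illustrative sketch only: the string-based trigger values and the exact property types (e.g. whether Triggers is a mutable list) are assumptions based on this page, not confirmed against the prerelease Azure.AI.VoiceLive API.

```csharp
// Hypothetical usage sketch -- property shapes are assumed from the
// descriptions above, not verified against the prerelease SDK.
var interimConfig = new Azure.AI.VoiceLive.LlmInterimResponseConfig
{
    // The model for interim response generation; gpt-4.1-mini is the documented default.
    Model = "gpt-4.1-mini",

    // Cap the length of the generated interim response.
    MaxCompletionTokens = 64,

    // Optional custom prompt; a default prompt is used when omitted.
    Instructions = "Briefly let the caller know you are still working on their request.",

    // Inherited from InterimResponseConfigBase; fires after 2000 ms of latency.
    LatencyThresholdMs = 2000,
};

// Triggers is inherited from InterimResponseConfigBase. Any listed trigger
// can fire the interim response (OR logic); 'latency' and 'tool' are supported.
interimConfig.Triggers.Add("latency");
interimConfig.Triggers.Add("tool");
```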