Factory & Cascade¶
get_llm_instance ¶

Create an LLM instance for the specified provider and model.

This is the primary factory function for creating LLM instances. It handles provider-specific initialization and configuration lookup.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `provider` | `str` | The LLM provider name. One of: `"openai"`, `"anthropic"`, `"gemini"`, `"deepseek"`, `"cohere"`. | *required* |
| `model` | `str` | The model identifier (e.g., `"gpt-4o"`, `"claude-sonnet-4-20250514"`). | *required* |
Returns:

| Type | Description |
|---|---|
| `LLM` | An LLM instance configured for the specified provider and model. |
Raises:

| Type | Description |
|---|---|
| `ConfigurationError` | If the provider or model is not recognized. |
Example

```python
llm = get_llm_instance("anthropic", "claude-sonnet-4-20250514")
response = await llm.get_response("Hello!")
```
Source code in src/majordomo_llm/factory.py
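The configuration lookup and the `ConfigurationError` path can be illustrated with a simplified, self-contained sketch. The registry below is a hypothetical stand-in for the library's real configuration, and the return value is a placeholder for an actual `LLM` instance:

```python
class ConfigurationError(Exception):
    """Raised when the provider or model is not recognized."""

# Hypothetical registry standing in for the library's real configuration lookup.
_KNOWN_MODELS = {
    "openai": {"gpt-4o"},
    "anthropic": {"claude-sonnet-4-20250514"},
    "gemini": {"gemini-2.5-flash"},
}

def get_llm_instance(provider: str, model: str):
    """Validate the provider/model pair before constructing anything."""
    if provider not in _KNOWN_MODELS:
        raise ConfigurationError(f"Unknown provider: {provider!r}")
    if model not in _KNOWN_MODELS[provider]:
        raise ConfigurationError(f"Unknown model for {provider}: {model!r}")
    return (provider, model)  # the real factory returns an LLM instance here
```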
get_all_llm_instances ¶

Create LLM instances for all configured providers and models.

Yields LLM instances one at a time, which is useful for initialization or for testing all available models.
Yields:

| Type | Description |
|---|---|
| `LLM` | LLM instances for each configured provider/model combination. |
Example

```python
for llm in get_all_llm_instances():
    print(llm.get_full_model_name())
```
Source code in src/majordomo_llm/factory.py
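The lazy, one-at-a-time behavior can be sketched as a plain generator. Note the sketch takes an explicit `config` mapping for illustration, whereas the real function takes no arguments and reads the library's configured providers; the yielded tuples stand in for `LLM` instances:

```python
def get_all_llm_instances(config):
    """Yield one instance per configured provider/model pair, lazily."""
    for provider, models in config.items():
        for model in models:
            # In the library this would call the factory and yield an LLM.
            yield (provider, model)

# Usage: iterate lazily instead of building the whole list up front.
config = {"openai": ["gpt-4o"], "anthropic": ["claude-sonnet-4-20250514"]}
instances = list(get_all_llm_instances(config))
```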
LLMCascade ¶

Bases: LLM

LLM wrapper that tries multiple providers in priority order.

When a provider fails with a ProviderError, the next provider in the cascade is tried. This provides automatic failover for resilience.

The providers list defines priority order: the first provider is tried first.
Attributes:

| Name | Type | Description |
|---|---|---|
| `llms` | | List of LLM instances in priority order. |
Example

```python
cascade = LLMCascade([
    ("anthropic", "claude-sonnet-4-20250514"),  # Primary
    ("openai", "gpt-4o"),                       # First fallback
    ("gemini", "gemini-2.5-flash"),             # Last resort
])
response = await cascade.get_response("Hello!")
```
Source code in src/majordomo_llm/cascade.py
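The failover behavior can be sketched with stub providers. `ProviderError`, `FlakyLLM`, and `cascade_response` below are simplified stand-ins for the library's types, not its actual implementation:

```python
import asyncio

class ProviderError(Exception):
    """Stand-in for the library's provider-failure exception."""

class FlakyLLM:
    """Hypothetical stub provider that either fails or answers."""
    def __init__(self, name, fails):
        self.name, self.fails = name, fails

    async def get_response(self, prompt):
        if self.fails:
            raise ProviderError(f"{self.name} unavailable")
        return f"{self.name}: ok"

async def cascade_response(llms, prompt):
    """Try each provider in priority order; re-raise only if all fail."""
    last_error = None
    for llm in llms:
        try:
            return await llm.get_response(prompt)
        except ProviderError as exc:
            last_error = exc
    raise last_error

# The primary fails, so the cascade falls through to the first fallback.
result = asyncio.run(cascade_response(
    [FlakyLLM("anthropic", fails=True), FlakyLLM("openai", fails=False)],
    "Hello!",
))
```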
__init__ ¶

```python
__init__(providers)
```
Initialize the cascade with a list of providers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `providers` | `list[tuple[str, str]]` | List of (provider, model) tuples in priority order. The first provider is tried first. | *required* |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the providers list is empty. |
Source code in src/majordomo_llm/cascade.py
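The empty-list check can be sketched as follows. This is a simplified `__init__` for illustration only; the real constructor also builds one LLM instance per (provider, model) tuple:

```python
class LLMCascade:
    def __init__(self, providers):
        if not providers:
            raise ValueError("providers list must not be empty")
        # The real implementation constructs an LLM per tuple via the factory.
        self.providers = list(providers)
```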
get_json_response async ¶

```python
get_json_response(
    user_prompt,
    system_prompt=None,
    temperature=0.3,
    top_p=1.0,
)
```
Get a JSON response, falling back to the next provider on failure.
Source code in src/majordomo_llm/cascade.py
get_response async ¶

```python
get_response(
    user_prompt,
    system_prompt=None,
    temperature=0.3,
    top_p=1.0,
)
```
Get a response, falling back to the next provider on failure.
Source code in src/majordomo_llm/cascade.py