# orchardcore-ai

Orchard Core AI - Prompt Templates
## Configure AI Integration

You are an Orchard Core expert. Generate code and configuration for AI integrations in Orchard Core.
## Guidelines
- Orchard Core supports AI integrations through the CrestApps AI module ecosystem.
- Supported AI providers: OpenAI, Azure, AzureAIInference, and Ollama.
- Configure AI services through `appsettings.json` or the admin UI.
- Use dependency injection to access AI services in modules.
- Always secure API keys using user secrets or environment variables, never hardcode them.
- AI profiles define how the AI system interacts with users, including system messages and response behavior.
- Profile types include `Chat`, `Utility`, and `TemplatePrompt`.
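The dependency-injection guideline above can be sketched as follows. This is a hypothetical consumer, not CrestApps API documentation: `IAIProfileManager` is the service used in the migration example later in this document, but `FindByNameAsync` is an assumed lookup method — check the CrestApps source for the real member names.

```csharp
// Sketch only: IAIProfileManager appears in the CrestApps migration example;
// FindByNameAsync below is an assumed member name.
public sealed class ChatProfileLookup
{
    private readonly IAIProfileManager _profileManager;

    // The CrestApps AI module registers its services in DI, so they can be
    // taken as constructor dependencies in any Orchard Core module.
    public ChatProfileLookup(IAIProfileManager profileManager)
    {
        _profileManager = profileManager;
    }

    public async Task<string> GetDisplayTextAsync(string profileName)
    {
        // Look up a profile by its technical name (assumed method name).
        var profile = await _profileManager.FindByNameAsync(profileName);
        return profile?.DisplayText;
    }
}
```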
## Enabling AI Features

```json
{
  "steps": [
    {
      "name": "Feature",
      "enable": [
        "CrestApps.OrchardCore.AI"
      ],
      "disable": []
    }
  ]
}
```
## AI Configuration in appsettings.json

```json
{
  "OrchardCore": {
    "CrestApps_AI": {
      "DefaultParameters": {
        "Temperature": 0,
        "MaxOutputTokens": 800,
        "TopP": 1,
        "FrequencyPenalty": 0,
        "PresencePenalty": 0,
        "PastMessagesCount": 10,
        "MaximumIterationsPerRequest": 1,
        "EnableOpenTelemetry": false,
        "EnableDistributedCaching": true
      },
      "Providers": {
        "OpenAI": {
          "DefaultConnectionName": "default",
          "DefaultDeploymentName": "gpt-4o",
          "DefaultIntentDeploymentName": "gpt-4o-mini",
          "DefaultEmbeddingDeploymentName": "",
          "DefaultImagesDeploymentName": "",
          "Connections": {
            "default": {
              "DefaultDeploymentName": "gpt-4o",
              "DefaultIntentDeploymentName": "gpt-4o-mini"
            }
          }
        }
      }
    }
  }
}
```
## Deployment Name Settings

| Setting | Description | Required |
|---|---|---|
| `DefaultDeploymentName` | The default model for chat completions | Yes |
| `DefaultEmbeddingDeploymentName` | The model for generating embeddings (for RAG/vector search) | No |
| `DefaultIntentDeploymentName` | A lightweight model for intent classification (e.g., `gpt-4o-mini`) | No |
| `DefaultImagesDeploymentName` | The model for image generation (e.g., `dall-e-3`) | No |
## Adding AI Provider Connection via Recipe

```json
{
  "steps": [
    {
      "name": "AIProviderConnections",
      "connections": [
        {
          "Source": "OpenAI",
          "Name": "default",
          "IsDefault": true,
          "DefaultDeploymentName": "gpt-4o",
          "DisplayText": "OpenAI",
          "Properties": {
            "OpenAIConnectionMetadata": {
              "Endpoint": "https://api.openai.com/v1",
              "ApiKey": "{{YourApiKey}}"
            }
          }
        }
      ]
    }
  ]
}
```
## Creating AI Profiles via Recipe

```json
{
  "steps": [
    {
      "name": "AIProfile",
      "profiles": [
        {
          "Source": "OpenAI",
          "Name": "{{ProfileName}}",
          "DisplayText": "{{DisplayName}}",
          "WelcomeMessage": "{{WelcomeMessage}}",
          "FunctionNames": [],
          "Type": "Chat",
          "TitleType": "InitialPrompt",
          "PromptTemplate": null,
          "ConnectionName": "",
          "DeploymentId": "",
          "Properties": {
            "AIProfileMetadata": {
              "SystemMessage": "{{SystemMessage}}",
              "Temperature": null,
              "TopP": null,
              "FrequencyPenalty": null,
              "PresencePenalty": null,
              "MaxTokens": null,
              "PastMessagesCount": null
            }
          }
        }
      ]
    }
  ]
}
```
## Defining Chat Profiles Using Code

```csharp
public sealed class SystemDefinedAIProfileMigrations : DataMigration
{
    private readonly IAIProfileManager _profileManager;

    public SystemDefinedAIProfileMigrations(IAIProfileManager profileManager)
    {
        _profileManager = profileManager;
    }

    public async Task<int> CreateAsync()
    {
        var profile = await _profileManager.NewAsync("OpenAI");
        profile.Name = "UniqueTechnicalName";
        profile.DisplayText = "A Display name for the profile";
        profile.Type = AIProfileType.Chat;

        profile.WithSettings(new AIProfileSettings
        {
            LockSystemMessage = true,
            IsRemovable = false,
            IsListable = false,
        });

        profile.WithSettings(new AIChatProfileSettings
        {
            IsOnAdminMenu = true,
        });

        profile.Put(new AIProfileMetadata
        {
            SystemMessage = "some system message",
            Temperature = 0.3f,
            MaxTokens = 4096,
        });

        await _profileManager.SaveAsync(profile);

        return 1;
    }
}
```
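For the migration to run, it must be registered in the module's `Startup`. This sketch assumes Orchard Core's standard `AddDataMigration<T>()` registration extension; verify the exact namespace against your OrchardCore version.

```csharp
// Standard Orchard Core migration registration (assumed available via the
// OrchardCore.Data.Migration extensions in your OrchardCore version).
public override void ConfigureServices(IServiceCollection services)
{
    services.AddDataMigration<SystemDefinedAIProfileMigrations>();
}
```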
## Extending AI Chat with Custom Functions

```csharp
public sealed class GetWeatherFunction : AIFunction
{
    public const string TheName = "get_weather";

    private static readonly JsonElement _jsonSchema = JsonSerializer.Deserialize<JsonElement>(
        """
        {
          "type": "object",
          "properties": {
            "Location": {
              "type": "string",
              "description": "The geographic location for which the weather information is requested."
            }
          },
          "additionalProperties": false,
          "required": ["Location"]
        }
        """);

    public override string Name => TheName;

    public override string Description => "Retrieves weather information for a specified location.";

    public override JsonElement JsonSchema => _jsonSchema;

    protected override ValueTask<object> InvokeCoreAsync(AIFunctionArguments arguments, CancellationToken cancellationToken)
    {
        if (!arguments.TryGetValue("Location", out var prompt) || prompt is null)
        {
            return ValueTask.FromResult<object>("Location is required.");
        }

        var location = prompt is JsonElement jsonElement
            ? jsonElement.GetString()
            : prompt.ToString();

        var weather = Random.Shared.NextDouble() > 0.5
            ? $"It's sunny in {location}."
            : $"It's raining in {location}.";

        return ValueTask.FromResult<object>(weather);
    }
}
```
## Registering Custom AI Tools

```csharp
services.AddAITool<GetWeatherFunction>(GetWeatherFunction.TheName);
```

Or with configuration options:

```csharp
services.AddAITool<GetWeatherFunction>(GetWeatherFunction.TheName, options =>
{
    options.Title = "Weather Getter";
    options.Description = "Retrieves weather information for a specified location.";
    options.Category = "Service";
});
```
## Security Best Practices

- Store API keys in user secrets during development:

  ```shell
  dotnet user-secrets set "OrchardCore:CrestApps_AI:Providers:OpenAI:Connections:default:ApiKey" "your-key"
  ```

- Use environment variables in production.
- Apply appropriate permissions to restrict AI feature access.
- Monitor token usage and set rate limits for production deployments.
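In production, the same configuration key can be supplied as an environment variable. This sketch relies on the standard ASP.NET Core configuration convention, where each `:` in a key becomes `__` in the variable name:

```shell
# ":" separators in configuration keys become "__" in environment variable
# names (standard ASP.NET Core configuration-provider convention).
export OrchardCore__CrestApps_AI__Providers__OpenAI__Connections__default__ApiKey="your-key"
```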
## Source

[View on GitHub](https://github.com/CrestApps/CrestApps.AgentSkills/blob/main/src/CrestApps.AgentSkills/orchardcore/orchardcore-ai/SKILL.md)

## Overview
Orchard Core AI enables configuring AI integrations within Orchard Core using CrestApps modules. It covers registering AI services, enabling features, and wiring providers like OpenAI, Azure, AzureAIInference, and Ollama. It also defines AI profiles (Chat, Utility, TemplatePrompt) and supports appsettings.json or admin UI configuration with dependency injection and secure keys.
## How This Skill Works
Configure AI services through appsettings.json or the admin UI, then use DI to access AI services in modules. Define AI profiles with system messages and behavior, and assign deployment names and provider connections via recipes or UI. Enabling features is done by toggling CrestApps.OrchardCore.AI in a feature step.
## When to Use It
- When integrating an AI provider into an Orchard Core app using CrestApps AI modules (OpenAI, Azure, AzureAIInference, Ollama).
- When you need to set default AI behavior and limits via DefaultParameters in appsettings.json.
- When you want to add AI provider connections using a Recipe with AIProviderConnections.
- When you need AI profiles (Chat, Utility, TemplatePrompt) with specific system messages and deployment settings.
- When you must secure API keys and avoid hardcoding by using user secrets or environment variables.
## Quick Start
- Step 1: Enable the CrestApps AI feature by adding a feature step for CrestApps.OrchardCore.AI.
- Step 2: Configure appsettings.json under OrchardCore.CrestApps_AI with DefaultParameters and OpenAI provider settings.
- Step 3: Create an AIProviderConnections recipe to connect OpenAI as the default provider, then create an AIProfile recipe for a chat profile.
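The three steps can be combined into a single recipe. This sketch reuses only the step names and fields shown earlier in this document; the profile name and system message are hypothetical placeholders.

```json
{
  "steps": [
    {
      "name": "Feature",
      "enable": [ "CrestApps.OrchardCore.AI" ],
      "disable": []
    },
    {
      "name": "AIProviderConnections",
      "connections": [
        {
          "Source": "OpenAI",
          "Name": "default",
          "IsDefault": true,
          "DefaultDeploymentName": "gpt-4o",
          "DisplayText": "OpenAI",
          "Properties": {
            "OpenAIConnectionMetadata": {
              "Endpoint": "https://api.openai.com/v1",
              "ApiKey": "{{YourApiKey}}"
            }
          }
        }
      ]
    },
    {
      "name": "AIProfile",
      "profiles": [
        {
          "Source": "OpenAI",
          "Name": "assistant",
          "DisplayText": "Assistant",
          "Type": "Chat",
          "Properties": {
            "AIProfileMetadata": {
              "SystemMessage": "You are a helpful assistant."
            }
          }
        }
      ]
    }
  ]
}
```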
## Best Practices
- Secure API keys using user secrets or environment variables; never hardcode them.
- Use dependency injection to access AI services from Orchard Core modules.
- Define deployment names to easily switch between models without code changes.
- Validate AI profiles for SystemMessage and parameter usage before going live.
- Test across multiple providers (OpenAI, Azure, Ollama) to pick the best fit for your scenario.
## Example Use Cases
- Enable the CrestApps AI feature in Orchard Core using a feature step.
- Configure appsettings.json with DefaultParameters and an OpenAI provider block.
- Add an OpenAI provider connection via a Recipe step named AIProviderConnections.
- Create an AI profile via a Recipe with Type set to Chat and a SystemMessage.
- Use DefaultDeploymentName and other DeploymentName settings to control which model is used for chat and embeddings.