Integrating AI into .NET Applications with Semantic Kernel
Did you know that integrating artificial intelligence into your applications can boost user experience and engagement dramatically? By using the Semantic Kernel SDK, C# developers can seamlessly connect to large language models like OpenAI’s GPT with minimal configuration and a modular plugin approach.
What is Semantic Kernel?
Semantic Kernel is an open-source AI toolkit that serves as a bridge between your C# codebase and powerful language models. It introduces a lightweight abstraction layer—called the kernel—that manages prompt orchestration, service configuration, and plugin invocation. With Semantic Kernel, you define plugins (either prompt-based or native code) that encapsulate discrete AI tasks. The kernel then handles calling these plugins in sequence, passing inputs and outputs, and integrating external services without boilerplate code. Whether you need to summarize documents, generate creative content, or interact with specialized APIs, Semantic Kernel makes integration predictable and maintainable.
Overview of the Tutorial
In this tutorial, we’ll build a command-line application named Factman that:
- Discovers a common myth about AI.
- Busts the myth with factual information.
- Adapts the rebuttal into a social media-ready message.
- Simulates posting to a social platform.
We will cover the entire process: from setting up a new .NET console app, installing the kernel SDK, defining prompt and native function plugins, to assembling and executing everything in a clear, step-by-step flow. By the end, you’ll have hands-on experience orchestrating multiple AI calls, handling input variables, and leveraging C# for custom logic.
Setting Up Your .NET Project
First, ensure you have the .NET SDK (version 6.0 or later) installed. You can verify by running dotnet --version in your terminal. Once confirmed, follow these steps to scaffold a fresh project:
Creating Your C# Console App
- Create a new console project:
dotnet new console -n SKDemo
- Navigate into the project directory:
cd SKDemo
- Build to verify:
dotnet build
A successful build confirms your environment is ready.
Configuring the Semantic Kernel Builder
Next, add the kernel package:
dotnet add package Microsoft.SemanticKernel
Open Program.cs and configure the kernel builder:
using Microsoft.SemanticKernel;
var builder = Kernel.CreateBuilder();
// Load API key securely from environment
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
if (string.IsNullOrEmpty(apiKey))
{
Console.WriteLine("Please set the OPENAI_API_KEY environment variable.");
return;
}
// Register OpenAI chat completion service
builder.Services.AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey);
This code ensures your key is not hard-coded and adds resiliency by validating the environment setup. You can swap "gpt-3.5-turbo"
for any other model supported by your account or provider.
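If your model is hosted on Azure OpenAI rather than the public OpenAI endpoint, the registration call changes slightly. A minimal sketch; the deployment name, endpoint, and environment variable below are placeholders, not real values, and depending on your SDK version the extension may live in a separate Azure OpenAI connector package:

```csharp
// Alternative: register an Azure OpenAI chat completion service instead.
// Deployment name and endpoint are hypothetical placeholders.
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: "my-gpt-deployment",
    endpoint: "https://my-resource.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY") ?? "");
```

The rest of the tutorial is unchanged: the kernel abstracts the provider away, so downstream plugin code never needs to know which service answered the call.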
Building Prompt-Based Plugins
Prompt plugins consist of a JSON config and a text prompt file. They instruct the kernel how to call the language model.
Defining Plugin Schema
Inside your project root, create:
plugins/
FactmanPlugin/
FindMyth/
BustMyth/
AdaptMessage/
Each folder will contain:
- config.json: Metadata for the kernel (schema version, description, execution settings, input variables).
- skprompt.txt: A template prompt to send to your AI model.
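On macOS, Linux, or Git Bash, the folder tree above can be scaffolded from the project root in one command (PowerShell users would use New-Item instead):

```shell
# Create the three prompt-plugin folders under the project root
mkdir -p plugins/FactmanPlugin/FindMyth plugins/FactmanPlugin/BustMyth plugins/FactmanPlugin/AdaptMessage
```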
Creating Prompt Plugins with Variables
- FindMyth
config.json:
{ "schema": 1, "description": "Find a common myth about AI", "execution_settings": { "default": { "max_tokens": 300, "temperature": 0.8 } } }
skprompt.txt:
List a single common myth about artificial intelligence. Ensure the myth is family-friendly and does not reference real individuals.
- BustMyth
config.json:
{ "schema": 1, "description": "Provide factual information to debunk the myth", "input_variables": [ { "name": "myth" } ] }
skprompt.txt:
Myth: {{$myth}}
Provide a concise, fact-based rebuttal. Use clear examples and keep it appropriate for all audiences.
- AdaptMessage
config.json:
{ "schema": 1, "description": "Transform the rebuttal into a social post", "input_variables": [ { "name": "input" }, { "name": "platform" } ] }
skprompt.txt:
You are an influencer on {{$platform}}. Begin with "Factman to the rescue!" then craft an engaging post using the idea below:
{{$input}}
By declaring input_variables in config.json, the kernel accepts dynamic data at runtime, making these plugins highly reusable.
Developing a Native Function Plugin
While prompt plugins drive language model calls, native plugins handle custom logic in C#:
Create plugins/SocialPlugin.cs:
using System.ComponentModel;
using Microsoft.SemanticKernel;

public class SocialPlugin
{
    [KernelFunction, Description("Simulate posting to social media.")]
    public void Post(string message, string platform)
    {
        // Simulate network latency
        Thread.Sleep(500);
        Console.WriteLine($"[Posted to {platform}]: {message}");
    }
}
The [KernelFunction] attribute registers this method with the kernel so it can be invoked just like prompt plugins.
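Note that Thread.Sleep blocks the calling thread. Since the kernel awaits plugin results, a Task-returning variant with Task.Delay is often preferable. A sketch; the [KernelFunction] attribute is omitted here only so the class compiles without the SDK package, and in a real plugin you would keep it:

```csharp
using System.Threading.Tasks;

// Non-blocking variant: the kernel can await Task-returning functions,
// so simulated latency doesn't tie up a thread.
public class SocialPluginAsync
{
    public async Task<string> PostAsync(string message, string platform)
    {
        await Task.Delay(500); // simulate network latency without blocking
        return $"[Posted to {platform}]: {message}";
    }
}
```

Returning the confirmation string instead of writing straight to the console also makes the function's output available to later steps in a pipeline.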
Assembling and Invoking Plugins
With all plugins in place, modify Program.cs to load them and execute in sequence:
builder.Plugins.AddFromType<SocialPlugin>(); // Load native C# plugin
var kernel = builder.Build();
kernel.ImportPluginFromPromptDirectory("plugins/FactmanPlugin"); // Load prompt plugins
// Step 1: Find a myth
var myth = await kernel.InvokeAsync<string>("FactmanPlugin", "FindMyth");
Console.WriteLine($"Myth: {myth}");
// Step 2: Bust the myth
var busted = await kernel.InvokeAsync<string>("FactmanPlugin", "BustMyth", new KernelArguments { ["myth"] = myth });
Console.WriteLine($"Rebuttal: {busted}");
// Step 3: Adapt for social media
var postContent = await kernel.InvokeAsync<string>(
    "FactmanPlugin", "AdaptMessage",
    new KernelArguments { ["input"] = busted, ["platform"] = "Twitter" }
);
Console.WriteLine($"Adapted Post: {postContent}");
// Step 4: Simulate posting
await kernel.InvokeAsync("SocialPlugin", "Post", new KernelArguments { ["message"] = postContent, ["platform"] = "Twitter" });
This flow demonstrates chaining multiple AI calls with a clear data pipeline, all orchestrated by the kernel.
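Because the platform is supplied at invocation time, the same AdaptMessage plugin can target several networks without any new prompt files. A sketch, reusing the kernel and the busted rebuttal from the steps above:

```csharp
// Adapt the same rebuttal for multiple platforms by varying one argument
foreach (var platform in new[] { "Twitter", "LinkedIn" })
{
    var post = await kernel.InvokeAsync<string>(
        "FactmanPlugin", "AdaptMessage",
        new KernelArguments { ["input"] = busted, ["platform"] = platform });
    Console.WriteLine($"[{platform}] {post}");
}
```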
Running and Testing Factman
Execute your application:
dotnet run
Observe the console logs for each step. If you see authentication errors, re-check that your OPENAI_API_KEY is set and that you have network access. To fine-tune the creative style or response length, adjust the temperature or max-token settings in your plugin configs.
Conclusion
You’ve now created a modular, AI-powered C# application using Semantic Kernel with both prompt-based and native plugins. This architecture scales nicely—add more plugins, integrate new services, or swap models without rewriting core logic.
- Actionable takeaway: Pick a simple AI use case today and implement it as a plugin. Iterate on your prompt, test end-to-end, and expand your kernel with additional functionality.
What creative AI scenario will you tackle next?