Building a Simple Semantic Kernel Agent in C#

Introduction

Microsoft’s Semantic Kernel is a powerful framework that enables developers to integrate large language models (LLMs) into their applications seamlessly. Whether you’re building chatbots, content generators, or intelligent automation tools, Semantic Kernel provides the building blocks to create sophisticated AI-powered agents.

In this post, we’ll explore how to build a simple yet effective Semantic Kernel agent in C# that can understand user requests, plan actions, and execute tasks autonomously.

What is Semantic Kernel?

Semantic Kernel is an open-source SDK that allows developers to:

  • Integrate AI services like OpenAI GPT, Azure OpenAI, and other language models
  • Create plugins that extend AI capabilities with custom functions
  • Build AI agents that can plan and execute multi-step tasks
  • Combine traditional programming with AI-powered natural language processing

Think of it as a bridge between your application logic and AI services, providing a structured way to build intelligent applications.
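
To make this concrete, here is a minimal sketch of calling a model through the kernel before we add any agent behavior. It assumes only that your OpenAI API key is available in the OPENAI_API_KEY environment variable:

using Microsoft.SemanticKernel;

// Build a kernel with a single OpenAI chat completion service
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-3.5-turbo",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();

// Send one prompt and print the model's reply
var result = await kernel.InvokePromptAsync("Explain Semantic Kernel in one sentence.");
Console.WriteLine(result);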

Why Do You Need an Agent?

Traditional AI integrations often involve simple request-response patterns. However, agents take this a step further by:

  • Autonomous Decision Making: Agents can analyze user requests and determine the best course of action
  • Multi-step Planning: They can break down complex tasks into smaller, manageable steps
  • Tool Integration: Agents can use various tools and APIs to accomplish goals
  • Context Awareness: They maintain conversation context and can reference previous interactions

Building a Simple Semantic Kernel Agent

Let’s create a basic agent that can help with file operations and web searches. Here’s a minimal working example:

Step 1: Install Required Packages

First, install the necessary NuGet packages:

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Plugins.Core

Step 2: Create the Agent

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using System.ComponentModel;

public class SimpleSemanticKernelAgent
{
    private readonly Kernel _kernel;
    private readonly IChatCompletionService _chatService;
    private readonly ChatHistory _chatHistory;

    public SimpleSemanticKernelAgent(string apiKey, string model = "gpt-3.5-turbo")
    {
        // Create kernel builder
        var builder = Kernel.CreateBuilder();

        // Add OpenAI chat completion service
        builder.AddOpenAIChatCompletion(model, apiKey);

        // Add plugins
        builder.Plugins.AddFromType<FileOperationsPlugin>();
        builder.Plugins.AddFromType<WebSearchPlugin>();

        // Build kernel
        _kernel = builder.Build();

        // Get chat completion service
        _chatService = _kernel.GetRequiredService<IChatCompletionService>();

        // Initialize chat history
        _chatHistory = new ChatHistory();
        _chatHistory.AddSystemMessage(
            "You are a helpful assistant that can perform file operations and web searches. " +
            "When users ask for help, analyze their request and use the available tools to assist them.");
    }

    public async Task<string> ProcessUserRequestAsync(string userInput)
    {
        try
        {
            // Add user message to history
            _chatHistory.AddUserMessage(userInput);

            // Configure execution settings
            var executionSettings = new OpenAIPromptExecutionSettings
            {
                ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,
                MaxTokens = 1000,
                Temperature = 0.7
            };

            // Get response from the agent
            var response = await _chatService.GetChatMessageContentAsync(
                _chatHistory, 
                executionSettings, 
                _kernel);

            // Add assistant response to history
            _chatHistory.AddAssistantMessage(response.Content ?? "");

            return response.Content ?? "I'm sorry, I couldn't process your request.";
        }
        catch (Exception ex)
        {
            return $"An error occurred: {ex.Message}";
        }
    }
}

// Example plugin for file operations
public class FileOperationsPlugin
{
    [KernelFunction, Description("Read content from a text file")]
    public async Task<string> ReadFileAsync(
        [Description("Path to the file to read")] string filePath)
    {
        try
        {
            if (!File.Exists(filePath))
                return "File not found.";

            return await File.ReadAllTextAsync(filePath);
        }
        catch (Exception ex)
        {
            return $"Error reading file: {ex.Message}";
        }
    }

    [KernelFunction, Description("Write content to a text file")]
    public async Task<string> WriteFileAsync(
        [Description("Path to the file to write")] string filePath,
        [Description("Content to write to the file")] string content)
    {
        try
        {
            await File.WriteAllTextAsync(filePath, content);
            return "File written successfully.";
        }
        catch (Exception ex)
        {
            return $"Error writing file: {ex.Message}";
        }
    }
}

// Example plugin for web search (simplified)
public class WebSearchPlugin
{
    [KernelFunction, Description("Search the web for information")]
    public async Task<string> SearchWebAsync(
        [Description("Search query")] string query)
    {
        // In a real implementation, you would integrate with a search API
        // like Bing Search API, Google Custom Search, etc.
        await Task.Delay(1000); // Simulate API call

        return $"Search results for '{query}': [This is a simplified example. " +
               "In a real implementation, you would return actual search results.]";;
    }
}

Step 3: Using the Agent

class Program
{
    static async Task Main(string[] args)
    {
        // Initialize the agent with your OpenAI API key
        var agent = new SimpleSemanticKernelAgent("your-openai-api-key-here");

        Console.WriteLine("Semantic Kernel Agent initialized. Type 'exit' to quit.");

        while (true)
        {
            Console.Write("\nYou: ");
            var input = Console.ReadLine();

            if (input?.ToLower() == "exit")
                break;

            if (string.IsNullOrWhiteSpace(input))
                continue;

            Console.Write("Agent: ");
            var response = await agent.ProcessUserRequestAsync(input);
            Console.WriteLine(response);
        }
    }
}

Important Tips for Success

When building Semantic Kernel agents, keep these best practices in mind:

1. Design Clear Function Descriptions

  • Use descriptive function names and detailed descriptions
  • Provide clear parameter descriptions
  • Include examples in your documentation
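
For example, a plugin function whose name, description, and parameter descriptions spell out exactly what it does and what it expects gives the model far more to work with than a terse signature. Here is a hypothetical DirectoryPlugin sketched to illustrate the level of detail:

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class DirectoryPlugin
{
    [KernelFunction, Description("List the names of all files in a directory, one per line. Returns an error message if the directory does not exist.")]
    public string ListFiles(
        [Description("Absolute or relative path of the directory to list, e.g. './logs'")] string directoryPath)
    {
        if (!Directory.Exists(directoryPath))
            return $"Directory not found: {directoryPath}";

        return string.Join(Environment.NewLine,
            Directory.GetFiles(directoryPath).Select(Path.GetFileName));
    }
}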

2. Handle Errors Gracefully

  • Always wrap plugin functions in try-catch blocks
  • Return meaningful error messages
  • Log errors for debugging purposes

3. Optimize Performance

  • Use appropriate token limits to control costs
  • Implement caching for frequently used data
  • Consider using streaming responses for long operations
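
For long-running replies, streaming lets you print tokens as they arrive instead of waiting for the full completion. A minimal sketch of how ProcessUserRequestAsync could stream instead of calling GetChatMessageContentAsync:

// Stream the response chunk by chunk and print it as it arrives
var fullResponse = string.Empty;
await foreach (var chunk in _chatService.GetStreamingChatMessageContentsAsync(
    _chatHistory, executionSettings, _kernel))
{
    Console.Write(chunk.Content);
    fullResponse += chunk.Content;
}

// Keep the history consistent with the non-streaming version
_chatHistory.AddAssistantMessage(fullResponse);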

4. Security Considerations

  • Validate all inputs to your plugins
  • Implement proper authentication and authorization
  • Be cautious with file system access and external API calls
  • Never expose sensitive information in function descriptions
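
For instance, the FileOperationsPlugin above will read any path the model passes it. A simple guard is to resolve every path against an allowed base directory and reject anything that escapes it. This is only a sketch; the workspace location is an assumption you would configure yourself:

using System.ComponentModel;
using Microsoft.SemanticKernel;

public class SafeFileOperationsPlugin
{
    // Only files under this directory may be read (assumed location)
    private static readonly string BaseDirectory = Path.GetFullPath("/home/agent/workspace");

    [KernelFunction, Description("Read content from a text file inside the agent's workspace")]
    public async Task<string> ReadFileAsync(
        [Description("Path to the file, relative to the workspace")] string filePath)
    {
        // Resolve the requested path and make sure it stays inside the workspace
        var fullPath = Path.GetFullPath(Path.Combine(BaseDirectory, filePath));
        if (!fullPath.StartsWith(BaseDirectory + Path.DirectorySeparatorChar, StringComparison.Ordinal))
            return "Access denied: path is outside the allowed workspace.";

        if (!File.Exists(fullPath))
            return "File not found.";

        return await File.ReadAllTextAsync(fullPath);
    }
}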

5. Testing and Monitoring

  • Test your agent with various input scenarios
  • Monitor token usage and API costs
  • Implement logging to track agent behavior
  • Use A/B testing to improve agent responses
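
Semantic Kernel writes its own logs for prompt rendering and function invocation, so registering a logger on the kernel builder is often enough to see exactly which functions the agent calls. A sketch, assuming the Microsoft.Extensions.Logging.Console package is added to the project:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey);

// Console logging: Semantic Kernel will log prompt and function-call activity through it
builder.Services.AddLogging(logging => logging
    .AddConsole()
    .SetMinimumLevel(LogLevel.Information));

var kernel = builder.Build();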

Summary

Semantic Kernel agents represent a powerful way to build intelligent applications that can understand natural language, plan actions, and execute tasks autonomously. The example we’ve built demonstrates the core concepts:

  • Kernel Configuration: Setting up the AI service and plugins
  • Plugin Development: Creating custom functions the agent can use
  • Conversation Management: Maintaining context across interactions
  • Error Handling: Gracefully managing failures and edge cases

With these foundations, you can extend the agent to support more complex scenarios, integrate with additional APIs, and create sophisticated AI-powered applications that truly understand and assist your users.

The future of software development increasingly involves AI collaboration, and Semantic Kernel provides an excellent framework for building these intelligent partnerships. Start simple, iterate quickly, and gradually add more capabilities as your understanding and requirements grow.

Backup Files from Ubuntu to Azure

Hey there, how are you doing? I have been busy setting up a VPS (not in Azure) with PostgreSQL and wanted to set up a recovery mechanism for it. Because the VPS is not in Azure, I have no option to take a snapshot or backup of the server. So, I decided to write a simple script that takes backups of files and uploads them to Azure Blob Storage using the Azure CLI. If you want to use something other than Azure, feel free to do so. I use Azure on my account, and the charges are low. You can check the pricing here.

Alright then, let’s start taking a backup of the database using the pg_dump command:

#$USERNAME is the admin username that you have set
#$DATABASE_NAME is the Database name
#$BACKUP_FILE is the path where the file will be dumped
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

Now that we have a way to export a dump of the database, let’s go ahead and identify the Access Key for the Storage Account that holds the Container where I want to store the files. This can be found under Access keys in the Security + networking section of the Storage Account. If you want to learn more about Azure Blob Storage, please visit the documentation.

Here, either the key1 or key2 value can be used.

Next, we need the Azure CLI. Let’s execute this command to install it:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az --version

A successful installation would result in az --version giving the version info.

Now that the dependencies are sorted out, we can look into the backup script. Note that this script can be modified to back up any files. Save this file as backup_script.sh in the /home/vps_backup/scripts directory.

#!/bin/bash

# Set the password for PostgreSQL user
export PGPASSWORD='YOUR_PASSWORD'

# Replace these values with your actual Azure Blob Storage account details
AZURE_STORAGE_ACCOUNT="MYBACKUP"
AZURE_STORAGE_ACCESS_KEY="STORAGE_ACCESS_KEY"
CONTAINER_NAME="backups"

# Define variables for the backup
USERNAME=postgres
DATABASE_NAME=mydatabase
# Get the current date and time
today=$(date +"%Y-%m-%d_%H-%M-%S")
todayDate=$(date +"%Y-%m-%d")

# Set the filenames to today's date.
BACKUP_FILE=/home/vps_backup/backups/backup_$today.sql
BACKUP_ZIPFILE=/home/vps_backup/backups/backup_$today.tar.gz

# Perform the backup using pg_dump
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

# Unset the password to clear it from the environment
unset PGPASSWORD

# Generating a compressed file using tar
tar -czvf $BACKUP_ZIPFILE $BACKUP_FILE

# Upload the backup files to Azure Blob Storage using Azure CLI
# using -d for directory using the $todayDate variable to store the files based on dates
az storage blob directory upload --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --container $CONTAINER_NAME --source $BACKUP_ZIPFILE -d $todayDate

The final step is to set up a cron job so that our script gets executed every hour.

crontab -e
0 * * * * /home/vps_backup/scripts/backup_script.sh

Although I have used the Azure CLI with Azure Blob Storage, you can use any storage medium, and the script can be adapted to back up any files, such as database dumps from PostgreSQL, MongoDB, MySQL, and so on.

Here is a snapshot of the Azure Storage Browser showing the Container with a specific date directory.

Deconstructing TV Shows Reminder

TV Shows Reminder - Get Reminders for your Favorite TV Shows

In this post, we’ll dive deep into how TV Shows Reminder is architected, exploring everything from the frontend and backend to the infrastructure choices and integrations that make the web app perform smoothly.

Introduction

TV Shows Reminder is designed to help users effortlessly keep track of their favorite TV shows, receiving timely notifications about upcoming episodes. The architecture behind this app blends modern frontend technologies, robust backend services, and cloud infrastructure, ensuring scalability, performance, and security.

Architecture Overview

At a high level, TV Shows Reminder employs a microservices-inspired architecture. The frontend uses ReactJS with Redux for state management, the backend relies on a .NET WebAPI, and Strapi is utilized as a separate content management service, all orchestrated seamlessly through Cloudflare’s infrastructure and various Azure services.

Frontend: ReactJS, Redux, TailwindCSS

The frontend is built with ReactJS, providing a responsive and dynamic user experience. TypeScript ensures type safety and robustness, minimizing bugs at compile-time. TailwindCSS offers a highly maintainable styling solution.

State management is streamlined with Redux, offering predictable state transitions. Data fetched from APIs and search results are cached in local storage, significantly enhancing response times.

The app leverages Cloudflare Pages for hosting, combined with Cloudflare Workers and Workers KV for serving static data such as show details, seasons, and episodes, minimizing backend hits and ensuring rapid content delivery.

Backend and Data Management

The backend services are powered by ASP.NET WebAPI (.NET 8), hosted on Ubuntu servers. Crucial data is cached using Redis, dramatically improving response times and minimizing database latency.
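
The caching itself follows the usual cache-aside pattern: check Redis first, fall back to the database on a miss, then store the result with a time-to-live. A rough sketch of how this might look in the WebAPI using StackExchange.Redis; the key format and TTL are illustrative, not the app's actual values:

using StackExchange.Redis;

public class ShowCache
{
    private readonly IDatabase _redis;

    public ShowCache(IConnectionMultiplexer connection) => _redis = connection.GetDatabase();

    public async Task<string> GetShowDetailsAsync(int showId, Func<int, Task<string>> loadFromDatabase)
    {
        var key = $"show:{showId}"; // illustrative key format

        // Try the cache first
        var cached = await _redis.StringGetAsync(key);
        if (cached.HasValue)
            return cached.ToString();

        // Cache miss: load from the database and cache with a TTL
        var details = await loadFromDatabase(showId);
        await _redis.StringSetAsync(key, details, TimeSpan.FromHours(6));
        return details;
    }
}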

Strapi acts as a separate microservice, managing user-related information and homepage content. This modular approach maintains separation of concerns, makes updates easier, and improves security by abstracting unnecessary details away from the frontend.

Firebase Authentication simplifies user credential management, eliminating the overhead of storing sensitive data on internal servers.

Image Handling and Optimization

Images are managed via Imbo, an image server deployed on DigitalOcean Spaces. Imbo offers real-time image resizing and manipulation capabilities, ensuring optimal image delivery speed and size.

Metadata for image lifecycle management is stored in Azure Table Storage, where image identifiers are used to track images older than 60 days, enabling timely cleanup. Meanwhile, Imbo maintains duplicate copies of these images in DigitalOcean Spaces until deletion, ensuring consistency and availability.
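
A cleanup job along these lines could query the table for stale records and delete the associated images. This is a hedged sketch using the Azure.Data.Tables SDK; the table name and entity shape are assumptions for illustration only:

using Azure.Data.Tables;

var connectionString = "<storage-connection-string>";
var tableClient = new TableClient(connectionString, "imagemetadata"); // assumed table name
var cutoff = DateTimeOffset.UtcNow.AddDays(-60);

// Find image records older than 60 days
await foreach (var entity in tableClient.QueryAsync<TableEntity>(e => e.Timestamp < cutoff))
{
    var imageIdentifier = entity.RowKey; // e.g. the Imbo image identifier
    // ...delete the image from Imbo / DigitalOcean Spaces here, then remove the record
    await tableClient.DeleteEntityAsync(entity.PartitionKey, entity.RowKey);
}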

Search and External Integrations

Search functionality integrates directly with TMDB’s robust search engine. Results are combined with optimized images from Imbo, ensuring accuracy and visual appeal.

When searches occur, resulting show IDs are queued into Azure Service Bus. This action triggers a .NET-based Worker Service deployed on a separate Ubuntu server. This service fetches detailed show data and updated images through imgcdn.in, communicating with both Imbo and the primary .NET WebAPI.
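
The hand-off itself is small: the API drops the show ID onto a Service Bus queue and the worker picks it up. A sketch of the sending side with the Azure.Messaging.ServiceBus SDK; the queue name and message shape are illustrative, not the app's actual values:

using Azure.Messaging.ServiceBus;

var showId = 12345; // example TMDB show ID from a search result

// Queue the show ID for the background worker to enrich with details and images
await using var client = new ServiceBusClient("<service-bus-connection-string>");
ServiceBusSender sender = client.CreateSender("show-refresh"); // illustrative queue name

await sender.SendMessageAsync(new ServiceBusMessage(showId.ToString())
{
    MessageId = showId.ToString() // allows duplicate detection if enabled on the queue
});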

Notification Management

Notifications are orchestrated via Azure Logic Apps, triggered every 12 hours, and run on Azure App Services. This service processes upcoming shows and user subscriptions to generate personalized email notifications.

To prevent duplication and ensure robustness, notification data is first stored in MongoDB. Emails are dispatched primarily through SendGrid, with AWS Simple Email Service (SES) serving as a reliable backup.

Furthermore, OneSignal enables browser-based push notifications, extending user engagement beyond emails.

Security and Performance

Security is integral to the system architecture. JWT tokens and TOTP codes protect API endpoints, preventing replay attacks and ensuring authenticated access.

Redis caching dramatically reduces latency, enabling faster response times from backend services. Cloudflare Workers play an additional crucial role, managing caching and API security efficiently, offering protection against common web vulnerabilities.

Lessons Learned and Future Improvements

Building TV Shows Reminder provided several key insights:

  • Separation of frontend and backend services significantly eases development and maintenance.
  • Leveraging dedicated microservices like Strapi can significantly simplify content and user-data management.
  • Caching and image optimization considerably enhance performance and scalability.

Looking ahead, there is potential for:

  • Enhanced automation and refinement in image lifecycle management.
  • Integration of machine learning for personalized user recommendations.
  • Continuous improvements to the notification engine to deliver even more targeted and timely alerts.

Conclusion

TV Shows Reminder exemplifies a well-thought-out, scalable architecture using modern frontend frameworks, robust backend services, and strategic cloud integrations. The blend of best practices ensures an optimal user experience and a maintainable codebase poised for future growth and enhancements.