AIInsights - Prompt Management System v2

Overview

The AI Insights Prompt Management System v2 provides centralized management of AI system prompts and templated user queries for the AI Insights feature. This updated version introduces a combo-based data selection UI, dynamic schema retrieval from the database, role-based access control, and an LLM-only approach for SQL generation. The system allows administrators to configure prompts that guide the AI’s SQL query generation, ensuring consistent, secure, and efficient responses across different business domains.

Why It Is Needed

  1. Selective Prompt Loading: Instead of loading all prompts, only relevant prompts (global + selected table-specific) are used, reducing token consumption and improving performance
  2. Dynamic Schema Management: Schema information is retrieved directly from the database for the selected data type, ensuring accuracy and eliminating hardcoded configurations
  3. Role-Based Security: Users can only access data types for which they have appropriate roles, enhancing data security and compliance
  4. LLM-Only Generation: All queries are generated through AI, eliminating direct SQL execution vulnerabilities while maintaining flexibility
  5. User-Friendly UI: Combo box selection simplifies data type choice and enables role validation
  6. Centralized Management: Easy to update prompts and add new data types without code deployments
  7. Scalability: Simple to extend with new data types and corresponding prompts

Database Schema

AISystemPrompts Table

CREATE TABLE AISystemPrompts (
    id INT PRIMARY KEY IDENTITY(1,1),
    ContextTable NVARCHAR(20),  -- Use 'Global' for global prompts, table names for specific prompts
    SysRoleName NVARCHAR(50),  -- Role required to execute this prompt (e.g., DashboardSales, DashboardPurchase)
    PromptCategory NVARCHAR(20),  -- Category of query (Sales, Purchase, GRN, Outstanding, etc.)
    PromptText NVARCHAR(MAX)
);

AISuggestedPrompts Table

CREATE TABLE AISuggestedPrompts (
    id INT PRIMARY KEY IDENTITY(1,1),
    PromptCategory NVARCHAR(20),
    SuggestedText NVARCHAR(MAX)
);

UI Components

Data Type Selection Combo Box

The AI Insights UI now includes a combo box that allows users to select the type of data they want to interact with; each option maps to a dashboard table (see the mapping sketch after this list):

  • Sales → DashboardSales
  • Purchase → DashboardPurchase
  • GRN → DashboardGrn
  • Indent → DashboardIndent
  • (Additional options can be added as needed)
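
A minimal sketch of how the combo selection could be mapped to its dashboard table; the table names come from the list above, while the class and method names are hypothetical.

// Illustrative mapping from the combo box option to its dashboard table.
// The table names mirror the list above; the class and method names are hypothetical.
public static class DataTypeMap
{
    private static readonly Dictionary<string, string> SelectionToTable =
        new(StringComparer.OrdinalIgnoreCase)
        {
            ["Sales"] = "DashboardSales",
            ["Purchase"] = "DashboardPurchase",
            ["GRN"] = "DashboardGrn",
            ["Indent"] = "DashboardIndent"
            // Additional options can be registered here as new data types are added
        };

    public static bool TryGetTable(string selection, out string tableName) =>
        SelectionToTable.TryGetValue(selection, out tableName);
}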

Role-Based Access Control

Role validation is performed when a user selects a data type from the combo box:

  1. The system maps the selection to the corresponding table name (e.g., “GRN” → “DashboardGrn”)
  2. Validates that the user has the required role matching the table name (e.g., “DashboardGrn” role)
  3. If the user lacks the required role:
    • Disables the Search button
    • Shows access-denied message

Role groups can be organized as follows (a minimal validation sketch follows the list):

  • sales_grp: Contains DashboardSales role
  • purchase_grp: Contains DashboardPurchase role
  • grn_grp: Contains DashboardGrn role
  • indent_grp: Contains DashboardIndent role
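
A minimal sketch of the role check performed on combo change, assuming the caller already has the current user's role names and reusing the hypothetical DataTypeMap sketched earlier.

// Hypothetical role check performed on combo change.
// "selection" is the combo box value; DataTypeMap is the mapping sketched earlier.
public bool ValidateAccess(string selection, IReadOnlyCollection<string> userRoles, out string requiredRole)
{
    requiredRole = string.Empty;

    // Map the selection to its dashboard table; the required role name matches the table name
    if (!DataTypeMap.TryGetTable(selection, out var tableName))
        return false;

    requiredRole = tableName; // e.g. "GRN" -> "DashboardGrn"

    // When false, the UI disables the Search button and shows an access-denied message
    return userRoles.Contains(requiredRole, StringComparer.OrdinalIgnoreCase);
}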

Autocomplete & Template Prompts

The system supports two types of prompts:

Suggested / Template Prompts

  • Predefined prompts that serve as starting templates
  • Filtered by the selected module (QueryType)
  • Users can select a template and edit it before execution (a loading sketch is shown after the example below)

Ad-hoc User Prompts

  • Custom prompts entered directly by users
  • No predefined structure, full flexibility

Example:

  • Template: Show me top 10 parties for the current year
  • Edited: Show me top 10 parties for 2024
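
A minimal sketch of loading suggested prompts for the selected module, assuming plain ADO.NET (SqlConnection/SqlCommand) and a _connectionString field; the table and column names come from the AISuggestedPrompts schema above, everything else is illustrative.

// Illustrative loading of suggested/template prompts for the selected module.
// Table and column names follow the AISuggestedPrompts schema above; the
// connection handling and _connectionString field are assumptions.
public async Task<List<string>> GetSuggestedPrompts(string promptCategory)
{
    const string sql = @"SELECT SuggestedText
                         FROM AISuggestedPrompts
                         WHERE PromptCategory = @PromptCategory";

    var prompts = new List<string>();
    using var connection = new SqlConnection(_connectionString);
    using var command = new SqlCommand(sql, connection);
    command.Parameters.AddWithValue("@PromptCategory", promptCategory);

    await connection.OpenAsync();
    using var reader = await command.ExecuteReaderAsync();
    while (await reader.ReadAsync())
        prompts.Add(reader.GetString(0));

    return prompts;
}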

Prompt Execution Flow

Module Selection Validation

  • User selects a module from the combo box (e.g., Sales, Outstanding, GRN, Indent).
  • On combo change:
    • Validate user role against SysRoleName.
    • Disable search if role is missing and show access-denied message.
    • Load prompts only for the selected QueryType.

Cross-Module Protection

  • If the selected module is Sales and the user asks about Purchase:
    • LLM will receive only Sales schema.
    • System must return “No data found”.
    • No incorrect or partial results should be shown.

How Prompts Are Built

Selective System Prompt Construction

The AIInsightsService.GenerateSqlQueryFromPrompt method now builds prompts based on user selection:

  1. User selects data type from combo box (e.g., “Sales”)
  2. Map to table name (e.g., “DashboardSales”)
  3. Role validation ensures user has required role (e.g., “DashboardSales”)
  4. Fetch relevant prompts:
    • Global prompts (where ContextTable = ‘Global’)
    • Table-specific prompts (where ContextTable = selected table name)
  5. Dynamic schema retrieval using database query for selected table
  6. Construct complete prompt with global rules, table-specific instructions, and schema
  7. Send to LLM for SQL generation

Dynamic Schema Query

Instead of loading the entire schema from appsettings, the system dynamically retrieves the schema for the selected table:

SELECT
    s.name AS SchemaName,
    t.name AS TableName,
    c.name AS ColumnName,
    ty.name AS DataType,
    c.max_length
FROM sys.schemas s
JOIN sys.tables t ON t.schema_id = s.schema_id
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE t.name = @SelectedTableName
ORDER BY c.column_id;
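
A minimal sketch of the GetTableSchema helper referenced in the code example below; it runs the query above via plain ADO.NET (an assumption, the actual data access layer may differ) and flattens the rows into a text block for the prompt.

// Illustrative helper that runs the schema query above and formats the rows as
// plain text for the system prompt. ADO.NET usage and _connectionString are assumptions.
private async Task<string> GetTableSchema(string selectedTableName)
{
    const string sql = @"SELECT s.name, t.name, c.name, ty.name
                         FROM sys.schemas s
                         JOIN sys.tables t ON t.schema_id = s.schema_id
                         JOIN sys.columns c ON c.object_id = t.object_id
                         JOIN sys.types ty ON ty.user_type_id = c.user_type_id
                         WHERE t.name = @SelectedTableName
                         ORDER BY c.column_id;";

    var schema = new StringBuilder();
    using var connection = new SqlConnection(_connectionString);
    using var command = new SqlCommand(sql, connection);
    command.Parameters.AddWithValue("@SelectedTableName", selectedTableName);

    await connection.OpenAsync();
    using var reader = await command.ExecuteReaderAsync();
    while (await reader.ReadAsync())
    {
        // e.g. "dash.DashboardSales.SalesAmount (decimal)"
        schema.AppendLine(
            $"{reader.GetString(0)}.{reader.GetString(1)}.{reader.GetString(2)} ({reader.GetString(3)})");
    }

    return schema.ToString();
}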

Simplified Code Example:

public async Task<string> GenerateSqlQueryFromPrompt(string userPrompt, string selectedTable)
{
    // Fetch selective system prompts: global rules plus rules for the selected table
    var globalPrompts = await _repo.GetSystemPromptsByTable("Global");
    var tablePrompts = await _repo.GetSystemPromptsByTable(selectedTable);

    // Get the dynamic schema for the selected table
    var schema = await GetTableSchema(selectedTable);

    // Build the complete system prompt: global rules first, then table-specific rules
    var completeSystemPrompt = string.Join("\n\n",
        globalPrompts.Select(p => p.PromptText)
        .Concat(tablePrompts.Select(p => p.PromptText))
    );

    completeSystemPrompt += $"\n\nSchema:\n{schema}";

    // Continue with the OpenAI API call and return the generated SQL...
}

Credit Management

  • Each month, a scheduled job will reset the user’s balance credits to the maximum assigned credits for that user group.

AI Insights is allowed only when the corresponding Sysparameter is ON.
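
A minimal sketch of how these gates could be checked before a search; the parameter name, service names, and credit lookup are illustrative assumptions, not confirmed identifiers.

// Hypothetical pre-search gate. The parameter name "AIInsightsEnabled" and the
// service methods below are placeholders, not confirmed identifiers.
public async Task<bool> CanRunAiInsights(int userId)
{
    // AI Insights is allowed only when the Sysparameter is ON
    var enabled = await _sysParameterService.IsOn("AIInsightsEnabled");
    if (!enabled)
        return false;

    // Balance credits are reset monthly (by a scheduled job) to the maximum
    // assigned credits for the user's group; block the search when exhausted
    var balanceCredits = await _creditService.GetBalanceCredits(userId);
    return balanceCredits > 0;
}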

Templated Query Execution Flow

Templated queries now serve as predefined prompt starters:

  1. User selects templated query from list filtered by selected data type
  2. Pre-fills prompt input with the UserPrompt text
  3. Proceeds with normal LLM generation (no direct SQL execution)
  4. Displays AI-generated results in standard format

Example Data:

-- AISystemPrompts table data
INSERT INTO AISystemPrompts (ContextTable, SysRoleName, PromptCategory, PromptText) VALUES
('Global', NULL, 'Global', 'GLOBAL BUSINESS RULES - Apply to ALL queries:
- Always use Financial Year (FY) for year-based queries
- FY starts April 1 and ends March 31
- Use proper date filtering with CONVERT(DATE, column)
- Format monetary values to 2 decimal places
- Strip leading/trailing spaces from item and party names');

INSERT INTO AISystemPrompts (ContextTable, SysRoleName, PromptCategory, PromptText) VALUES
('DashboardSales', 'DashboardSales', 'Sales', 'SALES DOMAIN RULES:
- Use dash.DashboardSales table exclusively
- Focus on SalesQty and SalesAmount columns
- Customer analysis should prioritize top customers by revenue');

-- AISuggestedPrompts table data (SuggestedText column)
INSERT INTO AISuggestedPrompts (PromptCategory, SuggestedText) VALUES
('Sales', 'Show monthly sales trend for the current financial year'),
('Sales', 'Top 10 customers by sales amount'),
('Purchase', 'Monthly purchase analysis by supplier');

Logging Strategy (ScheduleLog)

Initial Log Insert (On Search Click)

  • As soon as the user clicks the Search button:
    • Insert a record into ScheduleLog
    • Retrieve generated LogId

Log Update (After LLM Response)

  • When the LLM returns the SQL query (the flow is sketched after this list):
    • Execute the query
    • Measure execution time
    • Update the same ScheduleLog record using LogId with:
      • Generated SQL query
      • Execution time
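
A minimal sketch of the two-step logging flow described above; the repository method names, parameters, and status value are assumptions, and only the sequence (insert on Search click, update after execution) comes from this section.

// Illustrative two-step logging flow. Repository method names and the status
// value are assumptions; only the sequence comes from the steps above.
public async Task<DataTable> ExecuteWithLogging(string userPrompt, string selectedTable)
{
    // 1. Insert a ScheduleLog record as soon as Search is clicked and capture the LogId
    var logId = await _repo.InsertScheduleLog(userPrompt, selectedTable, status: "Started");

    // 2. Generate the SQL via the LLM, execute it, and measure execution time
    var generatedSql = await GenerateSqlQueryFromPrompt(userPrompt, selectedTable);

    var stopwatch = Stopwatch.StartNew();
    var result = await _repo.ExecuteGeneratedQuery(generatedSql);
    stopwatch.Stop();

    // 3. Update the same ScheduleLog record with the generated SQL and execution time
    await _repo.UpdateScheduleLog(logId, generatedSql, stopwatch.ElapsedMilliseconds);

    return result;
}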

Error Handling Rules

  • No Data Found
    • Return only when:
      • Query executes successfully
      • Result count = 0
  • Error
    • SQL syntax issues
    • LLM generation failures
    • DB execution errors
    • Must not be masked as “No data found” (see the sketch below)
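
A minimal sketch of keeping “No data found” separate from real errors, assuming query results come back as a DataTable and that database errors surface as SqlException; the AiInsightsResult type and its factory methods are illustrative.

// Illustrative separation of "No data found" from real errors.
// AiInsightsResult and its factory methods are placeholders for the actual response type.
public async Task<AiInsightsResult> RunGeneratedQuery(string generatedSql)
{
    try
    {
        var table = await _repo.ExecuteGeneratedQuery(generatedSql);

        // "No data found" only when the query executed successfully and returned zero rows
        if (table.Rows.Count == 0)
            return AiInsightsResult.NoDataFound();

        return AiInsightsResult.Success(table);
    }
    catch (SqlException ex)
    {
        // SQL syntax or DB execution errors must surface as errors,
        // never be masked as "No data found"
        return AiInsightsResult.Error(ex.Message);
    }
}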

End-to-End Flow Summary

  1. User selects module → role validated
  2. Autocomplete loads module-specific prompts
  3. User selects/edits prompt
  4. Clicks Search → log inserted (Started)
  5. LLM generates SQL using module schema
  6. SQL executed → time measured
  7. Log updated with result / error
  8. Data or “No data found” shown to user

Benefits

  1. Token Efficiency: Selective prompt loading reduces AI costs and improves response times
  2. Enhanced Security: Role-based access prevents unauthorized data access
  3. Dynamic Accuracy: Live schema retrieval ensures queries work with current database structure
  4. User Experience: Intuitive combo selection with immediate validation feedback
  5. Maintainability: Easy to add new data types and prompts without code changes
  6. Scalability: Modular design supports expansion to new business domains
  7. Compliance: Role-based restrictions support data governance requirements
  8. Performance: LLM-only approach eliminates execution overhead of stored queries
  9. Comprehensive Logging: Detailed execution tracking for monitoring and debugging
  10. Robust Error Handling: Clear distinction between no data and actual errors
