# Programmatic API

## Using as a Library

AI Guard can be used programmatically for custom integrations:

```typescript
import { loadConfig, analyzeProject } from "@promise-inc/ai-guard";

async function checkCode() {
  // Load configuration from the project root
  const config = await loadConfig(process.cwd());

  // Analyze the entire project
  const result = await analyzeProject(process.cwd(), config);

  // Report any violations, grouped by file
  if (result.violations.length > 0) {
    console.error(`Found ${result.violations.length} violations`);

    for (const [filePath, violations] of Object.entries(result.violationsByFile)) {
      console.log(`\n${filePath}`);
      for (const violation of violations) {
        console.log(`  Line ${violation.line}: [${violation.rule}] ${violation.message}`);
      }
    }

    process.exit(1);
  }

  console.log("No violations found!");
}

checkCode();
```

## Core Functions

AI Guard exports several functions for programmatic use:

```typescript
// Load configuration from the project directory
async function loadConfig(cwd: string): Promise<AiGuardConfig>;

// Analyze the entire project
async function analyzeProject(
  cwd: string,
  config: AiGuardConfig
): Promise<AnalysisResult>;

// Analyze specific files
async function analyzeFiles(
  files: string[],
  config: AiGuardConfig
): Promise<AnalysisResult>;

// Get git staged files
async function getStagedFiles(cwd: string): Promise<string[]>;
```
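
For example, `analyzeFiles` can be pointed at an explicit file list, such as paths passed on the command line. This is a minimal sketch using only the functions above; error handling is omitted:

```typescript
import { analyzeFiles, loadConfig } from "@promise-inc/ai-guard";

// Analyze an explicit list of files, e.g. paths passed on the command line
async function checkFiles(files: string[]) {
  const config = await loadConfig(process.cwd());
  const result = await analyzeFiles(files, config);
  console.log(
    `${result.filesAnalyzed} files analyzed, ` +
      `${result.violations.length} violations in ${result.filesWithViolations} files`
  );
  return result;
}

checkFiles(process.argv.slice(2));
```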

## Types

Key types for working with the API:

```typescript
interface AnalysisResult {
  violations: Violation[];
  violationsByFile: Record<string, Violation[]>;
  filesAnalyzed: number;
  filesWithViolations: number;
}

interface Violation {
  file: string;
  line: number;
  rule: string;
  message: string;
  severity: "error" | "warning";
}

interface AiGuardConfig {
  maxCommentsRatio: number;
  forbidGenericNames: string[];
  maxFunctionLines: number;
  aiPatterns: { forbid: string[] };
  architecture: Record<string, string[]>;
  include: string[];
  exclude: string[];
}
```

## Custom Integration Example

The example below builds a custom tool that analyzes either the whole project or only staged files and writes a JSON report:

```typescript
import {
  analyzeFiles,
  analyzeProject,
  getStagedFiles,
  loadConfig,
} from "@promise-inc/ai-guard";
import fs from "fs/promises";

async function generateReport(staged = false) {
  const cwd = process.cwd();
  const config = await loadConfig(cwd);

  // Analyze only staged files when requested, otherwise the whole project
  let result;
  if (staged) {
    const files = await getStagedFiles(cwd);
    result = await analyzeFiles(files, config);
  } else {
    result = await analyzeProject(cwd, config);
  }

  // Generate a JSON report
  const report = {
    timestamp: new Date().toISOString(),
    filesAnalyzed: result.filesAnalyzed,
    filesWithViolations: result.filesWithViolations,
    totalViolations: result.violations.length,
    violations: result.violations,
  };

  await fs.writeFile("ai-guard-report.json", JSON.stringify(report, null, 2));
  console.log("Report saved to ai-guard-report.json");

  return result.violations.length === 0;
}

generateReport(process.argv.includes("--staged"));
```

## Available Rules

Access the full list of built-in rules:

```typescript
import { ALL_RULES } from "@promise-inc/ai-guard";

console.log(ALL_RULES);
// [
//   'excessive-comments',
//   'obvious-comments',
//   'generic-names',
//   'large-functions',
//   'ai-patterns'
// ]
```
The programmatic API is useful for editor extensions, custom CI integrations, or building AI Guard into development tools.
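
For instance, a CI step could surface violations as GitHub Actions annotations. This is a sketch: the `::error`/`::warning` workflow-command format belongs to GitHub Actions, not to AI Guard:

```typescript
import { analyzeProject, loadConfig } from "@promise-inc/ai-guard";

async function annotate() {
  const cwd = process.cwd();
  const config = await loadConfig(cwd);
  const result = await analyzeProject(cwd, config);

  for (const v of result.violations) {
    // GitHub Actions turns these lines into inline annotations on the PR
    console.log(
      `::${v.severity} file=${v.file},line=${v.line}::[${v.rule}] ${v.message}`
    );
  }

  // Fail the job only when at least one error-level violation is present
  if (result.violations.some((v) => v.severity === "error")) {
    process.exit(1);
  }
}

annotate();
```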