# Prompt Assembly

Utilities for packaging database context into a single LLM prompt.
## Files

- `prompt_builder.py`: Loads the schema, glossary, value-hint, and example artifacts and produces a structured prompt for SQL generation.
- `__init__.py`: Exposes the `load_context` and `build_prompt` helpers.
## Quick Start

1. Ensure the context artifacts exist under `db_agent/context/` (run the extractor, then curate the glossary, value hints, and examples).
2. In Python, build the prompt:

   ```python
   from db_agent.prompting import load_context, build_prompt

   ctx = load_context()
   prompt = build_prompt(
       question="Which vehicles hauled more than 30 tons last week?",
       context=ctx,
       table_hints=["dbo.SugarLoadData", "dbo.SugarVehicles"],
   )
   print(prompt)
   ```

3. Send the prompt to your LLM provider (OpenAI, Azure OpenAI, Anthropic, etc.) and parse the JSON response, which contains `sql` and `summary` fields.
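Step 3 leaves response parsing to you. As a minimal sketch (the `parse_llm_response` helper below is not part of this package; only the `sql` and `summary` field names come from the prompt contract), one robust approach is to strip the Markdown code fence that models often wrap JSON in before decoding:

```python
import json


def parse_llm_response(raw: str) -> tuple[str, str]:
    """Extract the ``sql`` and ``summary`` fields from an LLM reply.

    Tolerates replies wrapped in a Markdown code fence, a common
    model quirk even when the prompt asks for bare JSON.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with optional language tag)
        # and everything from the closing fence onward.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    payload = json.loads(text)
    return payload["sql"], payload["summary"]


reply = '```json\n{"sql": "SELECT 1", "summary": "Sanity check."}\n```'
sql, summary = parse_llm_response(reply)
```

Validating the returned `sql` (e.g. rejecting anything that is not a single `SELECT`) before execution is strongly recommended.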
## Configuration

- Customize file paths by instantiating `PromptConfig` with alternate filenames or directories.
- Use `table_hints` to restrict the schema section to relevant tables (this helps stay within context limits).
- Adjust `max_columns` to control how many column definitions are emitted per table.
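The authoritative field list for `PromptConfig` is in `prompt_builder.py`; purely as an illustration of the pattern, a config of this kind is often a dataclass along these lines (every field name below except `max_columns` is an assumption, not the real API):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class PromptConfig:
    """Hypothetical sketch of a prompt-builder config.

    Field names other than ``max_columns`` are illustrative only;
    see prompt_builder.py for the actual definition.
    """
    context_dir: Path = Path("db_agent/context")
    schema_file: str = "schema.json"
    glossary_file: str = "glossary.json"
    max_columns: int = 25  # column definitions emitted per table


# Override just the pieces you need; defaults cover the rest.
cfg = PromptConfig(context_dir=Path("/srv/agent/context"), max_columns=40)
```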
## Next Steps

- Integrate the prompt builder into a service layer that calls the LLM and executes the generated SQL with validation.
- Add retrieval heuristics that pick tables automatically via keyword matching instead of requiring manually supplied `table_hints`.
- Extend the prompt format to include recent query logs or execution feedback for reinforcement.
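The keyword-matching idea above can be prototyped without any ML: score each table by token overlap between the question and the table's identifiers, then pass the top matches as `table_hints`. A rough sketch, assuming the schema is available as a table-to-columns mapping (the `pick_tables` helper is hypothetical):

```python
import re


def pick_tables(question: str, schema: dict[str, list[str]], top_k: int = 3) -> list[str]:
    """Rank tables by token overlap with the question.

    ``schema`` maps table name -> list of column names. CamelCase
    identifiers like ``SugarLoadData`` are split into words before
    matching, so "tons" in the question can hit a ``Tons`` column.
    """
    q_tokens = set(re.findall(r"[a-z0-9]+", question.lower()))
    scores: dict[str, int] = {}
    for table, columns in schema.items():
        words = re.findall(r"[A-Z]?[a-z0-9]+", table + " " + " ".join(columns))
        t_tokens = {w.lower() for w in words}
        scores[table] = len(q_tokens & t_tokens)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [t for t in ranked if scores[t] > 0][:top_k]


schema = {
    "dbo.SugarLoadData": ["VehicleId", "Tons", "LoadDate"],
    "dbo.SugarVehicles": ["VehicleId", "PlateNumber"],
    "dbo.Invoices": ["InvoiceId", "Amount"],
}
hints = pick_tables("Which vehicles hauled more than 30 tons last week?", schema)
```

Naive token overlap misses synonyms ("truck" vs. "vehicle"); the curated glossary is a natural place to add alias expansion before scoring.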