DSPyWeekly Issue #1
Published on September 4, 2025
📚 Articles
Free Course - DSPy: Build and Optimize Agentic Apps - DeepLearning.AI
This course teaches you how to use DSPy to build and optimize LLM-powered applications. You’ll write programs using DSPy’s signature-based programming model, debug them with MLflow tracing, and automatically improve their accuracy with DSPy’s optimizers. Along the way, you’ll see how DSPy helps you easily switch models, manage complexity, and build agents that are both powerful and easy to maintain.
Context Engineering — A Comprehensive Hands-On Tutorial with DSPy
You may have heard about Context Engineering by now. This article will cover the key ideas behind creating LLM applications using Context Engineering principles, visually explain these workflows, and share code snippets that apply these concepts practically.
Learning DSPy: The power of good abstractions • The Data Quarry
In this post, we cover the key abstractions in DSPy and show how simple it is to get started. As a developer, you begin by defining signatures, which are a programmatic way to declare your intent and specify the expected input/output types. You then define a custom module (or compose several built-in modules) that calls those signatures. Under the hood, signatures and modules rely on adapters to formulate the prompt that the LM uses to accomplish its task.
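To make the signature-to-prompt idea concrete, here is a toy, pure-Python sketch of what an adapter conceptually does: it mechanically turns a declarative signature (an instruction plus named input/output fields) into a prompt string. The function name and format below are hypothetical illustrations, not DSPy's actual adapter API, which is considerably more elaborate.

```python
# Toy sketch of the adapter idea: a signature (instruction + field
# names) is enough to mechanically formulate a prompt for the LM.
# This is an illustration only, not DSPy's real adapter code.

def format_prompt(instruction, inputs, output_fields):
    """Turn a declarative signature plus input values into a prompt string."""
    lines = [instruction, ""]
    for name, value in inputs.items():
        lines.append(f"{name.capitalize()}: {value}")
    for name in output_fields:
        lines.append(f"{name.capitalize()}:")  # the LM completes this field
    return "\n".join(lines)

prompt = format_prompt(
    instruction="Answer the question concisely.",
    inputs={"question": "What is DSPy?"},
    output_fields=["answer"],
)
print(prompt)
```

Because the prompt is derived from the signature rather than hand-written, swapping the underlying model or output format only requires changing the adapter, not every prompt in your program.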
dspy-0to1-guide/examples/personas/support_sam.py at main · haasonsaas/dspy-0to1-guide · GitHub
Support-Sam: Customer Support with Knowledge Base. This persona demonstrates:
- RAG-based customer support
- Ticket classification and routing
- Response generation with a knowledge base
- Customer satisfaction scoring
- Escalation detection
🎥 Video
Fireside Chat with DSPy Creator w/ Omar Khattab - YouTube
In this interview, Omar Khattab, creator of DSPy, shares his journey from an early interest in programming and AI to pioneering work in information retrieval and modular AI systems. He discusses the influence of BERT on his PhD research at Stanford, which led to projects like ColBERT. Omar explains how DSPy rethinks AI development through modular architectures, signatures, and optimizers—tools that enable clearer task definitions, reasoning, and systematic improvements. He emphasizes the importance of open source, structured code, and prompt optimization for building scalable AI systems.
Automatic and Programmatic Prompt Optimization - YouTube
This tutorial shows how to code an automatic prompt optimizer, how DSPy's prompt optimization works and how to fully leverage its capabilities, and how to perform prompt engineering programmatically in Python, manipulating prompts with code to create powerful, maintainable, and composable AI programs. As a practical example, it demonstrates how to optimize the extraction of structured data from receipts, improving accuracy from 20% to 100%.
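The core loop behind automatic prompt optimization can be sketched in a few lines: propose candidate instructions, score each against a labeled dev set, and keep the best. The sketch below uses a mock extractor in place of an LLM call so it runs offline; the data, candidate instructions, and `mock_extract` behavior are all made up for illustration.

```python
# Toy sketch of programmatic prompt optimization: score candidate
# instructions on a labeled dev set and keep the best performer.
# mock_extract stands in for a real LLM call so the loop runs offline.

def mock_extract(instruction, example):
    # Pretend only the more detailed instruction extracts correctly.
    if "line item" in instruction:
        return example["total"]
    return None

dev_set = [
    {"text": "Coffee 3.50\nTotal 3.50", "total": "3.50"},
    {"text": "Bagel 2.00\nTotal 2.00", "total": "2.00"},
]

candidates = [
    "Extract the total.",
    "Read each line item, then extract the final total.",
]

def score(instruction):
    hits = sum(mock_extract(instruction, ex) == ex["total"] for ex in dev_set)
    return hits / len(dev_set)

best = max(candidates, key=score)
print(best, score(best))  # the more specific instruction scores 1.0
```

Real optimizers such as DSPy's replace the fixed candidate list with LLM-proposed rewrites and the mock extractor with actual program executions, but the evaluate-and-select skeleton is the same.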
Reflective Optimization of Agents with GEPA and DSPy
By Matei Zaharia, co-founder of Databricks.
DSPy GEPA Example: Listwise Reranker - YouTube
The main takeaway I hope to share is how to monitor your GEPA optimization run so you know whether you are on the right track or need to rethink your dataset. 🔬 As GEPA runs, it logs metrics to Weights & Biases. The obvious metric to watch is the validation-set performance achieved by the current best prompt. But there is also a concept particular to GEPA you need to be aware of: the Pareto frontier across your validation samples. GEPA achieves diverse exploration of prompts by constructing a Pareto frontier, where a prompt stays on the frontier as long as it outperforms the other candidate prompts on at least one of your validation samples!
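The per-sample Pareto frontier described above can be computed in a few lines. In this sketch, a candidate stays on the frontier if it achieves the best score on at least one validation sample; the prompt names and scores are invented purely for illustration.

```python
# Sketch of a per-sample Pareto frontier like the one GEPA tracks:
# a candidate survives if it is best on at least one validation sample.
# Candidate names and scores below are made up for illustration.

scores = {                 # candidate -> score on each validation sample
    "prompt_a": [0.9, 0.2, 0.4],
    "prompt_b": [0.5, 0.8, 0.4],
    "prompt_c": [0.6, 0.6, 0.3],  # never best on any sample
}

n = len(next(iter(scores.values())))
best_per_sample = [max(s[i] for s in scores.values()) for i in range(n)]

frontier = {
    name
    for name, s in scores.items()
    if any(s[i] == best_per_sample[i] for i in range(n))
}
print(frontier)  # prompt_c is dominated everywhere and drops off
```

Note how prompt_a and prompt_b both survive even though neither dominates overall: each is the best somewhere, which is exactly what keeps the candidate pool diverse.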
🚀 Projects
GitHub - ax-llm/ax: The pretty much "official" DSPy framework for TypeScript
Ax brings DSPy's revolutionary approach to TypeScript – just describe what you want, and let the framework handle the rest. Production-ready, type-safe, and works with all major LLMs.
GitHub - gepa-ai/gepa: Optimize prompts, code, and more with AI-powered Reflective Text Evolution
GEPA (Genetic-Pareto) is a framework for optimizing arbitrary systems composed of text components—like AI prompts, code snippets, or textual specs—against any evaluation metric. It employs LLMs to reflect on system behavior, using feedback from execution and evaluation traces to drive targeted improvements. Through iterative mutation, reflection, and Pareto-aware candidate selection, GEPA evolves robust, high-performing variants with minimal evaluations, co-evolving multiple components in modular systems for domain-specific gains.
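The mutate-evaluate-select cycle described above can be illustrated with a small deterministic sketch: propose mutations of a seed prompt, score each candidate per example, and keep the Pareto set of candidates that are best on at least one example. The `run` function stands in for executing an LLM program, and all prompts, examples, and scoring rules here are invented; GEPA's real mutation step uses LLM-driven reflection on execution traces.

```python
# Toy sketch of a GEPA-style loop: mutate a seed prompt, score each
# candidate per example, keep the per-example Pareto set. run() is a
# stand-in for executing an LLM program; all data here is invented.

examples = [
    ("2 + 2", "4"),
    ("two plus two", "4"),
]

def run(prompt, question):
    # Pretend the program only succeeds when the prompt covers the input style.
    if "digits" in prompt and any(ch.isdigit() for ch in question):
        return "4"
    if "words" in prompt and not any(ch.isdigit() for ch in question):
        return "4"
    return "?"

def per_example_scores(prompt):
    return [1.0 if run(prompt, q) == a else 0.0 for q, a in examples]

seed = "Add the numbers."
mutations = [  # stand-ins for LLM-proposed, reflection-driven rewrites
    seed + " Handle digits.",
    seed + " Handle words.",
    seed + " Handle digits and words.",
]

candidates = [seed] + mutations
n = len(examples)
best = [max(per_example_scores(c)[i] for c in candidates) for i in range(n)]
pareto = [c for c in candidates
          if any(per_example_scores(c)[i] == best[i] for i in range(n))]
print(pareto)  # the seed is dominated; the specialists and generalist survive
```

The two specialist mutations each survive by winning on one example, while the combined mutation matches both, mirroring how GEPA co-evolves components and keeps diverse partial winners rather than a single best candidate.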
GitHub - AdoHaha/dspy_fun
An introduction to DSPy - a framework for programmatically solving tasks with large language models and vision language models.
GitHub - evalops/cognitive-dissonance-dspy: A multi-agent LLM system for detecting and resolving cognitive dissonance.
Most multi‑agent systems resolve contradictions with debate, confidence scores, or arbitration heuristics. That's arm‑wrestling, not ground truth. When a claim is formalizable, we hand it to a theorem prover and return a proof object (or a failure). Stop arguing; prove it. Cognitive Dissonance DSPy detects belief conflicts between LLM agents, translates formalizable claims to Coq, and attempts a machine‑checked proof. If a claim can't be formalized, we say so and punt (fall back to labeled heuristics).
danilotpnta/IR2-project
Reproducibility Study of “InPars Toolkit: A Unified and Reproducible Synthetic Data Generation Pipeline for Neural Information Retrieval.” This project focuses on a reproducibility study of the InPars Toolkit, a tool designed for generating synthetic data to improve neural information retrieval (IR) systems. Our objective is to replicate and validate the methodology presented in the paper while building on the future-work directions proposed by the authors.
Thanks for reading DSPyWeekly!