Stay Updated with DSPyWeekly!
📚 Articles
Launching DSPy Interview Series on YouTube
I’ve kicked off a new DSPy Interview Series to spotlight the people building and pushing the DSPy ecosystem forward. In the first episode, I sat down with Vikram, the creator of AX-LLM, the TypeScript port of DSPy. We talked about his journey, why he built AX-LLM, his take on the signature abstraction, and what he’s planning next for the project. If you want to catch future interviews as they drop, hit subscribe and tap the bell icon so you don’t miss the next one.
GEPA vs Prompt Learning: Benchmarking Different Prompt Optimization Approaches
We introduced Prompt Learning on July 18, 2025 — an approach that builds simple feedback loops for optimizing LLM applications. A week later, DSPy released GEPA, another powerful prompt optimizer built around similar principles. This post compares Prompt Learning and GEPA, diving into the overall optimization philosophy, usability, and benchmarking.
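If you haven’t tried GEPA yet, here’s a minimal sketch of what compiling a DSPy program with it can look like; the metric, dataset, and model names below are placeholders of mine, and the post goes much deeper on methodology and results.

```python
import dspy

# Placeholder models; swap in whatever you actually use.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

def metric_with_feedback(gold, pred, trace=None, pred_name=None, pred_trace=None):
    # GEPA can use textual feedback alongside the score to guide its
    # reflective prompt rewrites.
    score = float(gold.answer.lower() in pred.answer.lower())
    feedback = "Answer matched the gold span." if score else "Answer missed the gold span."
    return dspy.Prediction(score=score, feedback=feedback)

program = dspy.ChainOfThought("question -> answer")
trainset = [
    dspy.Example(question="What does GEPA optimize?",
                 answer="prompts").with_inputs("question"),
]

optimizer = dspy.GEPA(
    metric=metric_with_feedback,
    auto="light",                            # budget preset
    reflection_lm=dspy.LM("openai/gpt-4o"),  # stronger model for reflection
)
optimized = optimizer.compile(program, trainset=trainset, valset=trainset)
```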
DSPy Status Streaming
Agents are becoming more capable but slower: while simple chatbots respond in about two seconds, agents that perform tasks like web search, database queries, and synthesis may take 15–30 seconds. Users tolerate this delay when they receive clear feedback, which is why modern interfaces display intermediate status updates such as “Searching the web…” or “Analyzing results…” to make the wait feel transparent rather than uncertain. Implementing this progressive feedback pattern is surprisingly straightforward in DSPy.
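To give a flavor of what the post walks through, here’s a minimal sketch using dspy.streamify and dspy.streaming.StatusMessageProvider; the agent, tool, and model below are illustrative placeholders, not the post’s exact code.

```python
import asyncio
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

def search_web(query: str) -> str:
    """Placeholder tool; swap in a real web search call."""
    return f"Top results for: {query}"

class AgentStatus(dspy.streaming.StatusMessageProvider):
    # These hooks return the strings shown to the user while the agent works.
    def tool_start_status_message(self, instance, inputs):
        return "Searching the web..."

    def tool_end_status_message(self, outputs):
        return "Analyzing results..."

agent = dspy.ReAct("question -> answer", tools=[search_web])
streaming_agent = dspy.streamify(agent, status_message_provider=AgentStatus())

async def main():
    async for chunk in streaming_agent(question="What's new in DSPy?"):
        if isinstance(chunk, dspy.streaming.StatusMessage):
            print(chunk.message)   # progressive feedback while the agent runs
        elif isinstance(chunk, dspy.Prediction):
            print(chunk.answer)    # final answer

asyncio.run(main())
```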
Case Study: How We Debug 1000s of Databases with AI at Databricks | Databricks Blog
Lessons from building an AI-assisted database debugging platform. Mentions using DSPy.
Parallel and Weaviate integration for a simple QA implementation
This notebook illustrates how to create an AI system with DSPy that uses Parallel's Chat with the Web API and Weaviate's Query Agent.
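Roughly, the system has the shape sketched below; parallel_web_chat and weaviate_query_agent are hypothetical stand-ins I’ve written for the real client calls, which the notebook covers properly.

```python
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

def parallel_web_chat(query: str) -> str:
    """Hypothetical stand-in for Parallel's Chat with the Web API."""
    return f"(live web answer for: {query})"

def weaviate_query_agent(query: str) -> str:
    """Hypothetical stand-in for Weaviate's Query Agent over your collections."""
    return f"(vector-database answer for: {query})"

# A ReAct agent can decide per question whether to go to the live web,
# the vector database, or both, then synthesize a final answer.
qa = dspy.ReAct("question -> answer", tools=[parallel_web_chat, weaviate_query_agent])
print(qa(question="Which recent talks covered DSPy?").answer)
```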
Managing Tools in DSPy | Elicited
Building production AI agents with DSPy is straightforward—until you need multiple specialized submodules, each with its own tools and behaviors. At that point, codebases tend to devolve into stringly-typed tool names, hidden dependencies, and orchestration spaghetti. This post presents an architecture that keeps things clean: explicit contracts, type safety, and a clear separation between program logic and runtime environments.
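The specifics are in the post, but the core idea can be sketched roughly like this; the dataclass contract and names below are my illustration, not the article’s code.

```python
from dataclasses import dataclass
from typing import Callable

import dspy

@dataclass
class TicketTools:
    """Explicit contract: everything this submodule is allowed to call."""
    search_tickets: Callable[[str], str]
    post_comment: Callable[[str, str], str]

class TicketAgent(dspy.Module):
    def __init__(self, tools: TicketTools):
        super().__init__()
        # The runtime environment (real clients in prod, fakes in tests) is
        # injected here, so the program logic never reaches for it by name.
        self.react = dspy.ReAct(
            "request -> resolution",
            tools=[tools.search_tickets, tools.post_comment],
        )

    def forward(self, request: str):
        return self.react(request=request)

def fake_search_tickets(query: str) -> str:
    """Test stub standing in for the real ticket-search client."""
    return f"3 tickets match '{query}'"

def fake_post_comment(ticket_id: str, body: str) -> str:
    """Test stub standing in for the real commenting client."""
    return f"commented on {ticket_id}"

agent = TicketAgent(TicketTools(search_tickets=fake_search_tickets,
                                post_comment=fake_post_comment))
# agent(request="Customer can't log in")  # requires dspy.configure(lm=...) first
```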
Engineering a human-aligned LLM evaluation workflow with Prodigy and DSPy · Explosion
Building reliable LLM systems isn’t about chasing a “magic prompt”; it’s about a disciplined, iterative development cycle. DSPy treats LLM pipelines as programs to be compiled and optimized, enabling structured experimentation with prompts, metrics, and models. But automated optimization is only as good as the metric behind it, which is why human judgment remains essential in the loop: LLM-as-a-judge paired with human feedback. This one comes from the folks who gave us spaCy; big fan of their work.
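As a rough sketch of the LLM-as-a-judge half only (the human-alignment step in Prodigy is where the article really shines), assuming a simple boolean judge signature of my own making:

```python
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

class JudgeAnswer(dspy.Signature):
    """Decide whether the predicted answer faithfully answers the question."""
    question: str = dspy.InputField()
    predicted_answer: str = dspy.InputField()
    verdict: bool = dspy.OutputField(desc="True if the answer is acceptable")

judge = dspy.ChainOfThought(JudgeAnswer)

def llm_judge_metric(example, prediction, trace=None):
    # These verdicts are what you audit against human annotations
    # (e.g. collected in Prodigy) before trusting the judge for optimization.
    return judge(question=example.question,
                 predicted_answer=prediction.answer).verdict

program = dspy.ChainOfThought("question -> answer")
devset = [
    dspy.Example(question="What does DSPy compile?",
                 answer="LM programs").with_inputs("question"),
]

evaluator = dspy.Evaluate(devset=devset, metric=llm_judge_metric, display_progress=True)
evaluator(program)
```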
🚀 Projects
stogefei/dspy_2
Language: Python | License: Apache License 2.0
Tomas13d/penguiflow-dspy
Penguiflow, DSPy, OpenAI testing and practice | Language: Python
jmorenobl/soni
Soni is a modern framework for building task-oriented dialogue systems that combines the power of DSPy for automatic prompt optimization with LangGraph for robust dialogue management. | Language: Python
CodeReclaimers/opencode-dspy
A naive attempt at generating better prompts for opencode using DSPy | Language: Python | License: Other
tom-doerr/dspy_reasoning
Language: Python | License: MIT License
Thanks for reading DSPyWeekly!