Stay Updated with DSPyWeekly!

DSPyWeekly Issue #10

Published on November 7, 2025

📚 Articles

Bay Area DSPy Meet Up - 18th Nov

Join us for an evening of talks, discussion, and connection at the Bay Area DSPy Meet Up! We've got an incredible lineup of speakers: Lakshya Agrawal (GEPA lead author, Berkeley), Omar Khattab (DSPy creator, MIT), and Christopher Potts (Bigspin co-founder, DSPy mentor, Stanford).

PAPER - E-CARE: An Efficient LLM-based Commonsense-Augmented Framework for E-Commerce

In e-commerce, matching user queries to relevant products is crucial for driving purchases, but current methods that rely on Large Language Models (LLMs) for commonsense reasoning can be costly due to real-time inference and annotation needs. To address this, the authors propose E-CARE (Efficient Commonsense-Augmented Recommendation Enhancer), which captures commonsense reasoning in a factor graph distilled from powerful LLMs. This allows recommendation models to benefit from LLM-level reasoning with only one LLM forward pass per query, significantly reducing computation. Experiments across two e-commerce recommendation tasks show that E-CARE improves performance by up to 12.1% in precision@5. Uses DSPy.

PAPER - Agent-Omni: Test-Time Multimodal Reasoning via Model Coordination for Understanding Anything

The paper introduces Agent-Omni, a framework that overcomes current limitations of multimodal large language models (MLLMs), which typically support only fixed modality pairs and require expensive dataset alignment for training. Instead of building a single unified multimodal model, Agent-Omni uses a master-agent system that coordinates multiple specialized foundation models (agents) across text, image, audio, and video. Uses DSPy.

Song on DSPy

Not kidding. "I DSPy optimised all night but my pickle files were empty." I have seen it all now.

Thinking in data

The article argues that the primary power of LLMs in business automation is their ability to solve the main bottleneck: converting "fuzzy," unstructured real-world input (like natural language requests) into structured data (like JSON) that downstream systems can use. From the perspective of DSPy, the article positions it as a superior, systematic solution to this exact problem, moving beyond brittle "prompt engineering."

Ankur Gupta on X: "ReAct (Reason + Act) and tool-calling in DSPy."

Code snippet: ReAct to find available domain names for your SaaS app idea.

Tomas Hernando Kofman on X

World waking up to prompt optimization.

🎥 Video

Iryna Kondrashchenko & Oleh Kostromin - Is Prompt Engineering Dead? | PyData Amsterdam 2025

This talk explores various automatic prompt optimization approaches, ranging from simple ones like bootstrapped few-shot to more complex techniques such as MIPRO and TextGrad, and showcases their practical applications through frameworks like DSPy and AdalFlow.

Breaking Away from Prompt Engineering! An Introduction to Prompt Optimization with DSPy [Archive]

The talk, in Japanese, is by Tomu Hirata from Databricks. This study session introduces the prompt optimization framework DSPy, and is co-hosted with Databricks, which provides integration support for DSPy. The session begins with an introduction to DSPy by Tomu Hirata, followed by practical demonstrations of DSPy in action.

Declarative LLM Engineering with DSPy and Dagster

Data Engineering Theatre, Thursday 25th Sep, 12:00-12:30. Data teams know the pain of moving from proofs of concept to ... | Channel: Big Data LDN

Using Stanford DSPy Framework To Train A LLM Model To Reason

Link to Colab Notebook: https://colab.research.google.com/drive/10eybjbFqavP6RLcgNjxp7mwEXb0LNXWj?usp=sharing This ... | Channel: Richard Aragon

🚀 Projects

bryaneburr/dorgy

An LLM-powered command line tool to automatically organize your files and directories | Language: Python | License: MIT License

egpivo/kb-bridge

MCP server for enhancing knowledge base search and retrieval | Language: Python

marcusjihansson/dspy-tree-of-thoughts

DSPy implementation of Princeton NLP's Tree of Thoughts | Language: Python

tosinamuda/llama-finetuning-dspy

DSPy GEPA synthetic data generation for instruction tuning Llama 3 on consulting one-pager and executive slide tasks. Outputs JSONL for AWS SageMaker. Uses uv. Works with watsonx, OpenAI, or OpenRouter backends. Includes prompt optimization, dataset synthesis, and quality checks. | Language: Python

anveshane/smrthi-training-dataset-generator

A DSPy/GEPA-based optimiser for generating a training dataset for Sanskrit fine-tuning | Language: Python

aijnek/dspy_multimodal

Sample code for automatic multimodal prompt optimization using DSPy | Language: Python

benvenker/dspy-agents

Language: Python