
DSPy, a machine learning framework for Language Models

What is DSPy?

A very quick description would be something like: building end-to-end Language Model (LM) applications by assembling various components, without any manual prompt engineering... At least, that's one of the main purposes of this framework.

DSPy is a machine learning (ML) framework created by the Stanford NLP group. It therefore relies on the same principles used in ML: a training dataset, a model, a loss function, and an optimizer.

So, what components can we assemble to create an LM/LLM app with DSPy?

- Signature Component: allows explicit specification of the application's inputs and outputs.
- Module Component: contains a prompting technique with an already crafted prompt.
- Metric Component: allows specification of the loss function we want to minimize for a specific task/use case.
- Optimizer Component: contains various optimization techniques to optimize the prompt and other parameters.
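To make the analogy concrete, here is a minimal, framework-free sketch of how these four components fit together. All names here are hypothetical illustrations, not the actual DSPy API; the fake LM is a lookup table standing in for real model calls.

```python
# Conceptual sketch (plain Python, NOT the real DSPy API) of the four components:
# a signature fixes the I/O contract (question in, answer out), a module turns a
# prompt + input into an output, a metric scores outputs, and an optimizer
# searches over candidate prompts to maximize the metric on a training set.

# Stand-in for an LM: canned completions keyed by (prompt_style, question).
FAKE_LM = {
    ("concise", "Capital of France?"): "Paris",
    ("verbose", "Capital of France?"): "The city in question is, of course, Paris.",
    ("concise", "2 + 2?"): "4",
    ("verbose", "2 + 2?"): "After careful thought, the result is 4.",
}

def module(prompt_style: str, question: str) -> str:
    """Module: applies a prompting technique (here, just a style tag) to the input."""
    return FAKE_LM[(prompt_style, question)]

def metric(prediction: str, gold: str) -> float:
    """Metric: full credit for an exact match, partial credit for containment."""
    if prediction == gold:
        return 1.0
    return 0.5 if gold in prediction else 0.0

def optimize(trainset, candidate_prompts):
    """Optimizer: pick the prompt that maximizes the total metric on the trainset."""
    return max(
        candidate_prompts,
        key=lambda p: sum(metric(module(p, q), a) for q, a in trainset),
    )

trainset = [("Capital of France?", "Paris"), ("2 + 2?", "4")]
best = optimize(trainset, ["concise", "verbose"])
print(best)  # "concise": its outputs match the gold answers exactly
```

The point of the sketch is the control flow, not the toy search: DSPy's optimizers do something far richer (e.g. bootstrapping few-shot demonstrations), but the loop "run module, score with metric, keep the best configuration" is the same shape.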

These components might not be easy to understand at first glance, especially the optimizer. So, let's dive in and see how each component works individually and how they operate once assembled.

New frameworks for generative AI

Introduction

Open-source frameworks have always been crucial for data scientists, with tools like pandas for data manipulation and scikit-learn for modeling. Recently, new frameworks have emerged in the field of generative AI (and it's not over yet...), aiming to facilitate the development, deployment, and monitoring of generative AI applications. These frameworks offer useful features for fine-tuning LLMs, for building RAG (Retrieval-Augmented Generation) architectures, for improving prompts, or simply for making an API call to one of our favorite LLMs with sensible default parameters already in place (pretty simple, right? 🙂).