

AI in modern applications must go beyond simple prompts to deliver real value—it needs to reason over your data, respect context, and integrate with external systems. In this fast-paced 90-minute workshop, you’ll learn how to power Spring Boot applications with Spring AI. We’ll start with chat completions using ChatClient, then ground responses with Retrieval-Augmented Generation (RAG). From there, we’ll extend the application with Tools and the Model Context Protocol (MCP)—standardizing how AI models call APIs, query data, and orchestrate real-world workflows. By the end, you’ll walk away with a working Spring AI project and a clear roadmap for building production-ready AI agents.
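As a taste of where the workshop starts, here is a minimal sketch of a chat endpoint built on Spring AI's ChatClient. It assumes a Spring Boot app with one of the chat model starters on the classpath; the class and endpoint names are illustrative, not part of the workshop materials.

```java
// Illustrative sketch — assumes a Spring Boot app with a Spring AI
// chat model starter (e.g. OpenAI or Anthropic) configured.
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder for the active provider.
    ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/chat")
    String chat(@RequestParam String question) {
        // Send the user's question to the model and return the text response.
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```

RAG, Tools, and MCP are layered onto this same fluent API during the workshop.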
Prerequisites
Spring AI supports a wide range of LLM providers. For this workshop, you can use any provider, but we have included instructions for two popular options below. For a complete list of supported providers and their specific configurations, please refer to the official Spring AI documentation.
Choose one of the following options to configure your connection to a Large Language Model (LLM):
Option A: OpenAI
Option B: Anthropic
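Whichever option you pick, configuration happens through Spring properties. A hedged sketch for `application.properties` follows; the property names come from Spring AI's OpenAI and Anthropic starters, but the model names are only examples — check the Spring AI documentation for current defaults. Keep API keys in environment variables, not in the file.

```properties
# Option A: OpenAI — requires the spring-ai OpenAI starter on the classpath.
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4o-mini

# Option B: Anthropic — requires the spring-ai Anthropic starter instead.
spring.ai.anthropic.api-key=${ANTHROPIC_API_KEY}
spring.ai.anthropic.chat.options.model=claude-3-5-sonnet-latest
```

Only one provider starter should be active at a time unless you explicitly configure multiple models.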
During the workshop, we will connect our Spring AI application to the GitHub MCP Server to interact with GitHub repositories. This requires a GitHub Personal Access Token (PAT).
Create a Personal Access Token: Follow the official GitHub documentation to create a “classic” token.
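One common way to hand the PAT to an MCP client is a servers JSON file in the widely used "mcpServers" format, launching the npm-distributed GitHub MCP server via npx. The package name below is the reference npm implementation, and the exact Spring AI property that loads such a file (e.g. one under `spring.ai.mcp.client.stdio.*`) should be verified against the Spring AI MCP documentation — treat this as an assumption to confirm during the workshop.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-pat-here>" }
    }
  }
}
```

The `GITHUB_PERSONAL_ACCESS_TOKEN` environment variable is how that server receives your token; never commit the file with a real token in it.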
Verify npx is installed:
Run the command: npx --version
Installing PostgreSQL with pgvector Extension
The pgvector extension adds vector data type support to PostgreSQL, which is required for storing and searching embeddings in AI applications. Below are instructions to install PostgreSQL with pgvector enabled on Windows, macOS, and Docker.
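For the Docker route, a minimal sketch using the community `pgvector/pgvector` image, which ships PostgreSQL with the extension preinstalled. The tag, container name, password, and port mapping below are illustrative — adjust them to your setup.

```yaml
# docker-compose.yml — illustrative; choose your own password and tag.
services:
  postgres:
    image: pgvector/pgvector:pg16   # PostgreSQL 16 with pgvector preinstalled
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
```

Start it with `docker compose up -d`, then connect on `localhost:5432` as usual.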
Verification
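As a smoke test, connect with `psql` and run something like the following. The table name and vector dimension are illustrative; the `vector` type and the `<->` distance operator come from pgvector itself.

```sql
-- Enable the extension, then round-trip a small vector.
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE vector_test (id serial PRIMARY KEY, embedding vector(3));
INSERT INTO vector_test (embedding) VALUES ('[1,2,3]');
-- "<->" is pgvector's L2 (Euclidean) distance operator.
SELECT embedding <-> '[3,1,2]' AS distance FROM vector_test;
```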
If the insert and select both succeed, your Postgres is now vector-enabled 🎉
For more details, see: https://docs.google.com/document/d/1t4NLKcxI8DiuKzEOuMVZN6t8Qbz0ML595xrFBr8naYQ/edit?tab=t.0