An intelligent customer support application that helps users manage their hotel bookings through a conversational AI interface. Built with Spring AI, this application demonstrates the power of AI-driven customer service, enabling customers to view, modify, and cancel their hotel reservations through natural language conversations.
- Project Overview
- Project Structure
- Main Features
- Technology Stack
- Getting Started
- Using the Application
- Docker Support
- What's Next
- License
- Author
This application showcases an AI-powered customer support system for hotel booking management. It leverages Spring AI with OpenAI's GPT models to create an intelligent chatbot that can:
- Retrieve booking information securely after validating customer credentials
- Modify room types based on availability and terms of service
- Cancel bookings according to the hotel's cancellation policy
- Answer questions about booking terms and policies using RAG (Retrieval-Augmented Generation) with file-backed persistence
The system uses a modern tech stack combining Spring Boot for the backend, Vaadin for the UI, and Spring AI for intelligent conversation handling.
```
Hotel-Booking-Customer-Support/
├── .mvn/                                  # Maven wrapper configuration
├── src/
│   ├── main/
│   │   ├── frontend/                      # React/TypeScript frontend components
│   │   │   ├── components/                # Reusable UI components (Message, MessageList)
│   │   │   ├── views/                     # Main application views
│   │   │   ├── generated/                 # Vaadin auto-generated React integration code
│   │   │   └── index.html                 # Frontend entry point html
│   │   ├── java/
│   │   │   └── rs/siriusxi/hbca/          # Main application package
│   │   │       ├── HCSAApplication.java   # Spring Boot entry point & vector store initialization
│   │   │       ├── config/                # Spring configuration classes
│   │   │       │   ├── AppConfig.java     # Application-wide configuration
│   │   │       │   └── BookingToolsConfig.java # AI function calling configuration
│   │   │       ├── domain/                # Domain entities
│   │   │       │   ├── Booking.java       # Hotel booking entity
│   │   │       │   ├── BookingStatus.java # Booking status enum
│   │   │       │   ├── Customer.java      # Customer entity
│   │   │       │   └── RoomType.java      # Room type enum
│   │   │       ├── repository/            # Data access layer
│   │   │       │   ├── BookingRepository.java  # Spring Data JPA Booking repository
│   │   │       │   └── CustomerRepository.java # Spring Data JPA Customer repository
│   │   │       ├── service/               # Business logic layer
│   │   │       │   ├── ai/
│   │   │       │   │   └── CustomerSupportAssistant.java # AI chat client with advisors
│   │   │       │   ├── mapper/
│   │   │       │   │   └── BookingDetailsMapper.java # MapStruct mapper for booking entities
│   │   │       │   └── HotelBookingService.java # Booking management operations
│   │   │       └── ui/                    # UI service endpoints
│   │   │           ├── dto/
│   │   │           │   └── HotelBookingDetail.java # Booking DTO
│   │   │           ├── AssistantUIService.java    # Chat endpoint for frontend
│   │   │           └── HotelBookingUIService.java # Booking data endpoint
│   │   └── resources/
│   │       ├── booking-terms.txt          # Hotel terms & conditions for RAG
│   │       └── SystemMessage.st           # AI system message template
│   └── test/                              # Test resources and classes
├── target/                                # Compiled classes and build artifacts
├── node_modules/                          # NPM dependencies
├── pom.xml                                # Maven project configuration
├── package.json                           # Node.js dependencies for frontend
├── Dockerfile                             # Docker image configuration
├── tsconfig.json                          # TypeScript configuration
├── vite.config.ts                         # Vite build tool configuration
├── AGENTS.md                              # AI development guidance
├── LICENSE                                # MIT License
└── README.md                              # This file
```
- Natural language processing for customer queries
- Context-aware responses using chat memory and advisors
- Friendly and helpful tone matching hotel customer service standards
- Validates customer identity before showing booking details
- Requires booking number, first name, and last name verification
- Prevents unauthorized access to sensitive booking information
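The verification rule above can be sketched in plain Java. This is a hypothetical simplification for illustration only; the `Booking` record, the in-memory map, and the `verifyIdentity` method below are not the project's actual `HotelBookingService` API:

```java
import java.util.Map;
import java.util.Optional;

public class IdentityCheckSketch {

    // Hypothetical, simplified booking record for illustration only
    record Booking(String number, String firstName, String lastName, String roomType) {}

    static final Map<String, Booking> BOOKINGS = Map.of(
            "101", new Booking("101", "Jack", "Bauer", "KING"));

    // Return booking details only when number, first name, and last name all match
    static Optional<Booking> verifyIdentity(String number, String first, String last) {
        Booking b = BOOKINGS.get(number);
        if (b != null
                && b.firstName().equalsIgnoreCase(first.trim())
                && b.lastName().equalsIgnoreCase(last.trim())) {
            return Optional.of(b);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(verifyIdentity("101", "Jack", "Bauer").isPresent()); // true: all fields match
        System.out.println(verifyIdentity("101", "Nina", "Myers").isPresent()); // false: name mismatch
    }
}
```

The key point is that a correct booking number alone is never enough; all three credentials must match before any detail is returned.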
- Automatic function execution based on customer intent
- Parallel function calling support for efficiency
- Tools for booking cancellation and room type modifications
- Vector store integration for booking terms and conditions
- File-backed `SimpleVectorStore` for persistent knowledge storage
- Intelligent policy lookup before allowing booking changes
- Ensures compliance with hotel policies automatically
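Under the hood, a vector store ranks stored policy chunks by embedding similarity to the customer's query. The toy sketch below illustrates the ranking idea with hand-made vectors and cosine similarity; it is not Spring AI's `SimpleVectorStore`, which computes real embeddings via the configured embedding model:

```java
import java.util.Comparator;
import java.util.List;

public class SimilaritySketch {

    // Cosine similarity between two equal-length vectors
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    record Chunk(String text, double[] embedding) {}

    public static void main(String[] args) {
        // Hand-made 3-dimensional "embeddings" for illustration only
        List<Chunk> store = List.of(
                new Chunk("Cancellations are free up to 48h before check-in", new double[]{0.9, 0.1, 0.0}),
                new Chunk("Room upgrades cost $50 per night", new double[]{0.1, 0.9, 0.2}));

        double[] query = {0.85, 0.15, 0.05}; // pretend embedding of "cancel my booking"

        // The chunk closest to the query wins and is fed to the model as context
        Chunk best = store.stream()
                .max(Comparator.comparingDouble(c -> cosine(query, c.embedding())))
                .orElseThrow();
        System.out.println(best.text());
    }
}
```

In the real application the same ranking happens against the chunks of `booking-terms.txt`, so the assistant quotes the relevant policy before changing or cancelling a booking.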
- Reactive streaming responses for better user experience
- Markdown support for formatted responses
- Message history preservation across the conversation
- React-based frontend with TypeScript
- Spring Boot backend with reactive programming
- Vaadin Hilla for seamless frontend-backend integration
- MapStruct for efficient object-to-object mapping
Spring Boot 4.1 introduces an official OpenTelemetry starter from the Spring team. Unlike previous approaches that required multiple dependencies and complex configuration, this starter provides:
How was this possible? The modularization of Spring Boot in version 4.1 enabled the team to create focused, optional starters like this one. To learn more about Spring Boot 4's modular architecture, check out the modularization examples.
- Single dependency: Just add `spring-boot-starter-opentelemetry`
- Automatic OTLP export: Metrics and traces are exported via the OTLP protocol
- Micrometer integration: Uses Micrometer's tracing bridge to export traces in OTLP format
- Vendor-neutral: Works with any OpenTelemetry-capable backend (Grafana, Jaeger, etc.)
There are three ways to use OpenTelemetry with Spring Boot:
- OpenTelemetry Java Agent - Zero code changes but can have version compatibility issues
- Third-party OpenTelemetry Starter – From the OTel project, but pulls in alpha dependencies
- Spring Boot Starter (this demo) – Official Spring support, stable, well-integrated
The key insight is that it's the protocol (OTLP) that matters, not the library. Spring Boot uses Micrometer internally but exports everything via OTLP to any compatible backend.
Spring Boot Actuator is Spring's traditional approach to observability and production readiness. Here's how it compares:
| Aspect | Spring Boot Actuator | OpenTelemetry Starter |
|---|---|---|
| Protocol | Prometheus, OTLP, JMX, + many others | OTLP (vendor-neutral) |
| Distributed Tracing | Built-in via Micrometer Tracing (add bridge dependency) | Built-in, automatic |
| Backend Lock-in | Vendor-neutral via Micrometer (supports 15+ backends including OTLP) | Works with any OTLP backend |
| Health Checks | Built-in `/actuator/health` | Not included (requires Actuator) |
| Production Readiness | Full suite (info, env, beans, metrics, etc.) | Focused on telemetry only |
| Setup Complexity | More endpoints to configure/secure | Single OTLP endpoint |
| Dependencies (Spring Boot 4.1) | `spring-boot-starter-actuator` + bridge deps | `spring-boot-starter-opentelemetry` |
Choose Actuator when:
- You need health checks, readiness/liveness probes for Kubernetes
- You want to expose application info, environment, or bean details
- Your monitoring stack is already Prometheus-based with scraping
Choose OpenTelemetry Starter when:
- You want vendor-neutral observability (easily switch backends)
- Distributed tracing across services is a priority
- You prefer push-based telemetry to pull-based scraping
Note: They're not mutually exclusive—many production apps use both (Actuator for health/readiness, OTel for telemetry).
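To try the starter route, the dependency addition might look like the fragment below. The artifact id comes from the starter name used throughout this README; the `org.springframework.boot` group id is an assumption, so verify the coordinates against the Spring Boot 4.1 release notes:

```xml
<!-- Assumed coordinates for the official OpenTelemetry starter (Spring Boot 4.1+) -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-opentelemetry</artifactId>
</dependency>
```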
- Java 25 – Latest Java LTS with preview features enabled
- Spring Boot 4.1.0-M1 – Application framework
- Spring AI 2.0.0-M2 - AI integration framework
- OpenAI integration for chat completions
- Vector store support for RAG
- Chat memory for conversation context
- Function calling for tool use
- Chat Advisors support
- Spring Data JPA - Data persistence
- H2 Database – Local file-backed database for persistence (stored in `./store/data/hbca`)
- Flyway - Database migration tool
- MapStruct 1.6.3 - Java bean mappings
- Jackson 2.19.0 – JSON processing
- Lombok - Reduces boilerplate code
- JUnit 5 – Unit testing framework
- Docker-compose (for Grafana LGTM stack)
- The LGTM stack is Grafana Labs' open-source observability stack. The acronym stands for:
- Loki — for logs (log aggregation system)
- Grafana — for visualization and dashboards
- Tempo — for traces (distributed tracing backend)
- Mimir — for metrics (long-term storage for Prometheus metrics)
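A minimal Docker Compose service for the all-in-one LGTM image might look like the sketch below. The `grafana/otel-lgtm` image name and port mapping are assumptions for illustration, not taken from this project's compose file:

```yaml
# Sketch: run the Grafana LGTM stack locally (assumed all-in-one image)
services:
  lgtm:
    image: grafana/otel-lgtm
    ports:
      - "3000:3000"   # Grafana UI
      - "4317:4317"   # OTLP gRPC ingest
      - "4318:4318"   # OTLP HTTP ingest
```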
- React 19.2.3 – UI library
- TypeScript 5.9.3 - Type-safe JavaScript
- Vaadin 25.0.3 - Full-stack framework
- Hilla for React integration
- React Components & Components Pro
- Vite 7.3.0 – Fast build tool
- react-markdown - Markdown rendering in chat
- Maven 3.x - Build automation
- Docker – Containerization
- Vaadin Maven Plugin - Frontend build integration
Before running this application, ensure you have:
- Java Development Kit (JDK) 25 or higher

  ```bash
  java -version
  ```

- Maven 3.6+ (or use the included Maven wrapper)

  ```bash
  mvn -version
  ```

- Node.js 18+ and npm (for frontend build)

  ```bash
  node --version
  npm --version
  ```

- OpenAI API Key – Required for AI functionality
  - Sign up at OpenAI
  - Generate an API key from your account dashboard
```bash
git clone https://github.com/mohamed-taman/Hotel-Booking-Customer-Support.git
cd Hotel-Booking-Customer-Support
```

The application uses Flyway for database migrations. The JDBC chat memory schema initialization is handled by Flyway (`V1__Create_schema.sql`), so Spring AI's automatic schema initialization is disabled in `application.yaml`.
Create an application.properties or application.yml file in src/main/resources/ with your OpenAI API key:
Option 1: application.properties
```properties
# OpenAI Configuration
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4
spring.ai.openai.chat.options.temperature=0.7

# OpenTelemetry Configuration
# 100% sampling for development
management.tracing.sampling.probability=1.0
# Exporting metrics
management.otlp.metrics.export.url=http://otlp.example.com:4318/v1/metrics
# Exporting traces
management.opentelemetry.tracing.export.otlp.endpoint=http://localhost:4318/v1/traces
# Exporting logs
management.opentelemetry.logging.export.otlp.endpoint=http://localhost:4318/v1/logs

# Server Configuration (optional)
server.port=8080

# Logging (optional)
logging.level.rs.siriusxi.hbca=INFO
logging.level.org.springframework.ai=DEBUG
```

- `sampling.probability`: Set to `1.0` for development (all traces). Use lower values in production (default is `0.1`)
- Port 4318: HTTP OTLP endpoint (use 4317 for gRPC)
- The `spring-boot-docker-compose` module auto-configures these endpoints when using Docker Compose
- `management.otlp.metrics.export.url`: Tells Spring Boot where to send metrics (counts, gauges, and histograms such as request counts, response times, and memory usage). The data goes to an OTLP-compatible collector.
- `management.opentelemetry.tracing.export.otlp.endpoint`: Tells Spring Boot where to send traces (timing and flow data showing how requests move through your app, with spans for each operation and its duration).
Why two separate configs? Spring Boot's observability evolved over time:
- Metrics use Micrometer's OTLP exporter (hence `management.otlp.metrics`)
- Traces use the OpenTelemetry tracing bridge (hence `management.opentelemetry.tracing`)
Both send data to the same collector (port 4318), but the configuration paths differ due to how the libraries are integrated.
You can view OpenTelemetry logs, metrics, and traces in Grafana at http://localhost:3000/
For more details, see the OpenTelemetry with Spring Boot blog post.
Option 2: Environment Variable
```bash
export OPENAI_API_KEY=your-api-key-here
```

For Windows:

```bash
set OPENAI_API_KEY=your-api-key-here
```

Build the application and download all dependencies:
```bash
# Using Maven wrapper (recommended)
./mvnw clean install

# Or using system Maven
mvn clean install
```

This will:
- Compile Java sources
- Download all Maven dependencies
- Install Node.js dependencies
- Build the frontend with Vite
- Run tests
- Package the application
Development Mode (with hot reload):
```bash
# Using Maven wrapper
./mvnw spring-boot:run

# Or simply (default goal is spring-boot:run)
./mvnw
```

Production Mode:
```bash
# Build production package
./mvnw -Pproduction package

# Run the JAR
java --add-opens java.base/sun.misc=ALL-UNNAMED \
     --add-opens java.base/java.nio=ALL-UNNAMED \
     -jar target/hbca-2.0-SNAPSHOT.jar
```

The application will start on http://localhost:8080
Execute the test suite:
```bash
# Run all tests
./mvnw test

# Run specific test class
./mvnw test -Dtest=HotelBookingServiceTest

# Run with coverage
./mvnw clean verify
```

- Open your browser and navigate to http://localhost:8080
- You'll see a chat interface with a message input
The application comes pre-loaded with sample bookings for testing:
| Booking # | First Name | Last Name | Hotel | Room Type | Check-in | Check-out | Guests | Status |
|---|---|---|---|---|---|---|---|---|
| 101 | Jack | Bauer | Marriot | KING | Today | Today+2 | 2 | ✅ |
| 102 | Chloe | O'Brian | Hilton | QUEEN | Today+2 | Today+4 | 1 | ✅ |
| 103 | Kim | Bauer | Sheraton | DOUBLE | Today+4 | Today+6 | 2 | ✅ |
| 104 | David | Palmer | Westin | SUITE | Today+6 | Today+8 | 3 | ✅ |
| 105 | Michelle | Dessler | Four Seasons | KING | Today+8 | Today+10 | 1 | ✅ |
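The Today+N check-in/check-out offsets above are relative to the current date, so the seed data stays valid whenever you start the application. A sketch of how such offsets can be computed with `java.time` (illustrative only, not the project's actual seeding code):

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class SeedDatesSketch {
    public static void main(String[] args) {
        LocalDate today = LocalDate.now();
        // e.g. booking 102: check-in Today+2, check-out Today+4
        LocalDate checkIn  = today.plusDays(2);
        LocalDate checkOut = today.plusDays(4);
        System.out.println(checkOut.isAfter(checkIn));               // stay has positive length
        System.out.println(ChronoUnit.DAYS.between(checkIn, checkOut)); // 2 nights
    }
}
```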
Example 1: View Booking
```
You: Hi, I'd like to check my booking

AI: Hello! I'd be happy to help you with that. To access your booking
    information, I'll need:
    1. Your booking number
    2. Your first name
    3. Your last name

You: My booking number is 101, first name Jack, last name Bauer

AI: [Shows booking details with check-in, check-out dates, room type, etc.]
```
Example 2: Change Room Type
```
You: I want to upgrade my room to a suite for booking 102, Chloe O'Brian

AI: [Checks terms and conditions via RAG]
    I can help you upgrade your room. According to our terms, room upgrades
    are subject to availability and may incur additional charges of $50 per
    night. Would you like to proceed?

You: Yes, please proceed

AI: Your room has been successfully upgraded to SUITE.
```
Example 3: Cancel Booking
```
You: Cancel booking 103 for Kim Bauer

AI: I've successfully cancelled your booking. You should receive a
    confirmation email shortly.
```
- The AI will remember context throughout the conversation
- You can ask about hotel policies and terms at any time
- The AI will proactively verify your identity before making changes
- Responses stream in real-time for a natural chat experience
```bash
# Standard build
docker build -t hotel-booking-ai:latest .

# With commercial Vaadin license (if applicable)
docker build --secret id=proKey,src=$HOME/.vaadin/proKey \
    -t hotel-booking-ai:latest .
```

```bash
docker run -p 8080:8080 \
    -e OPENAI_API_KEY=your-api-key \
    hotel-booking-ai:latest
```

Create a docker-compose.yml:
```yaml
version: '3.8'
services:
  hotel-booking-ai:
    build: .
    ports:
      - "8080:8080"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - SPRING_PROFILES_ACTIVE=production
```

Run with:

```bash
docker-compose up
```

This application provides a solid foundation for an AI-powered customer support system. Here are potential enhancements to consider:
- Multi-language Support: Detect customer language and respond accordingly
- Sentiment Analysis: Detect frustrated customers and escalate to human agents
- Voice Interface: Add speech-to-text and text-to-speech capabilities
- AI Model Selection: Support multiple AI providers (Azure OpenAI, Claude, Gemini)
- ✅ H2 Database Integration: Local file-backed database storage in `./store/data/`
- ✅ Vector Store Persistence: File-backed storage in `./store/rag/`
- Redis Cache: Add a caching layer for frequently accessed bookings
- ✅ Chat History Storage: Persist conversations for analytics and training
- Vector Database: Use Pinecone, Weaviate, or pgvector for better RAG performance
- OAuth2/OIDC: Integrate with external identity providers (Google, Microsoft)
- JWT Tokens: Secure API endpoints
- Role-Based Access: Add an admin panel for support agents
- Rate Limiting: Prevent API abuse
- API Key Rotation: Secure OpenAI key management with HashiCorp Vault
- Real Hotel APIs: Integrate with actual booking systems (Amadeus, Sabre)
- Payment Gateway: Add Stripe/PayPal for booking modifications
- Email Notifications: Send confirmation emails via SendGrid/AWS SES
- Calendar Integration: Add to Google Calendar/Outlook
- Multi-tenant Support: Support multiple hotel chains
- Agent Handoff: Seamlessly transfer complex issues to human agents
- Booking Recommendations: AI-powered upselling and cross-selling
- Predictive Analytics: Forecast customer needs based on conversation patterns
- A/B Testing: Test different AI prompt strategies
- Application Performance Monitoring: Integrate New Relic, DataDog, or Dynatrace
- Distributed Tracing: Add OpenTelemetry for microservices readiness
- AI Token Usage Tracking: Monitor and optimize OpenAI API costs
- User Analytics: Track conversation success rates and user satisfaction
- Error Tracking: Sentry or Rollbar integration
- Mobile App: Native iOS/Android applications
- Rich Media: Support image uploads (booking confirmations, IDs)
- Typing Indicators: Show when AI is processing
- Quick Replies: Suggest common actions as buttons
- Conversation Export: Allow users to download chat transcripts
- Integration Tests: Comprehensive API testing with WireMock for OpenAI
- E2E Tests: Playwright/Cypress for frontend testing
- Load Testing: JMeter/Gatling for performance validation
- AI Response Testing: Validate AI outputs against expected behaviors
- Chaos Engineering: Test resilience with chaos mesh
- CI/CD Pipeline: GitHub Actions, GitLab CI, or Jenkins
- Kubernetes Deployment: Helm charts for orchestration
- Infrastructure as Code: Terraform for cloud resources
- Blue-Green Deployments: Zero-downtime releases
- Auto-scaling: Horizontal pod autoscaling based on traffic
- GDPR Compliance: Data deletion and export capabilities
- Audit Logging: Track all booking modifications
- Data Anonymization: Mask sensitive information in logs
- Terms Acceptance: Track customer consent for terms of service
- Admin Dashboard: Vaadin admin panel for viewing all bookings
- Reporting: Generate booking trends and AI usage reports
- Customer Insights: Analyze common support requests
- Cost Analysis: Track AI API usage and optimize spending
Feel free to fork this repository and implement any of these features. Pull requests are welcome!
This project is licensed under the MIT License – see the LICENSE file for details.
Mohamed Taman
- Email: mohamed.taman@gmail.com
- Role: Solutions Architect & Java Software Architect
- GitHub: @mohamed-taman
Built with ❤️ using Spring AI and Vaadin