January 25, 2025

Enhancing Logging and Observability in Distributed Systems: Future-Proof Strategies for Developers

In the era of distributed systems and microservices architectures, effective logging and observability are essential for building resilient and maintainable applications. As systems become more complex, traditional logging methods often fall short in providing the necessary insights. This post explores advanced concepts and best practices that not only enhance logging and observability but also future-proof your applications, making development more efficient and effective.


1. Distributed Tracing with OpenTelemetry

Distributed tracing allows you to monitor requests as they traverse through various services in a distributed system. OpenTelemetry provides a unified set of APIs, libraries, agents, and instrumentation to enable observability across applications. By implementing OpenTelemetry, you can collect traces, metrics, and logs in a standardized format, facilitating seamless integration with various backends.

Why It Matters:

Distributed tracing offers end-to-end visibility into request flows, helping identify performance bottlenecks and failures across services. This comprehensive view is crucial for maintaining system reliability and performance.

Example Use Case:

In a microservices-based e-commerce platform, OpenTelemetry can trace a user's journey from browsing products to completing a purchase, giving insight into the performance of each service involved in the transaction.

Example Code:

Here's how you can set up OpenTelemetry for tracing in a Spring Boot application:

  1. Add dependencies to pom.xml:
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
    <version>1.6.0</version>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
    <version>1.6.0</version>
</dependency>
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
    <version>1.6.0</version>
</dependency>
  2. Configure tracing in your Spring Boot application:
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example.orders");

    public void processOrder(String orderId) {
        Span span = tracer.spanBuilder("processOrder").startSpan();
        try (Scope scope = span.makeCurrent()) {
            // Process order logic
            // For example, communicate with other services, validate payment, etc.
        } finally {
            span.end();
        }
    }
}

2. Centralized Log Management with Open-Source Tools

Centralizing logs from various services into a single platform enhances the ability to monitor, search, and analyze log data effectively. Tools like VictoriaLogs, an open-source log management solution, are designed for high-performance log analysis, enabling efficient processing and visualization of large volumes of log data.

Why It Matters:

Centralized log management simplifies troubleshooting by providing a unified view of logs, making it easier to correlate events across services and identify issues promptly.

Example Use Case:

Using VictoriaLogs, a development team can aggregate logs from all microservices in a platform, allowing for quick identification of errors or performance issues in the system.

Example Code:

You can configure logging in Spring Boot with Logback to ship logs to a centralized stack such as ELK (Elasticsearch, Logstash, Kibana). Note that Logback's built-in SocketAppender serializes events in a Java-specific format and does not take an encoder; for Logstash, the usual choice is the logstash-logback-encoder library, which sends JSON over TCP:

  1. Add the net.logstash.logback:logstash-logback-encoder dependency, then configure src/main/resources/logback-spring.xml:
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
    </root>
</configuration>

3. Standardized Logging Formats

Adopting standardized logging formats, such as JSON, ensures consistency and facilitates the parsing and analysis of log data. Structured, uniform formats significantly improve observability because every tool in the pipeline can ingest and parse them without custom handling.

Why It Matters:

Standardized logs are easier to process and analyze, enabling automated tools to extract meaningful insights and reducing the time required to correlate issues with specific code changes.

Example Code:

Here's how to configure Spring Boot to log in JSON format using Logback:

  1. Update logback-spring.xml to emit one JSON object per line (a pattern spread across several lines would log the literal whitespace; note also that a pattern layout does not escape quotes inside messages, so for production net.logstash.logback.encoder.LogstashEncoder is the more robust choice):
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>{"timestamp":"%date{ISO8601}","level":"%level","logger":"%logger","message":"%message","thread":"%thread"}%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>

4. Implementing Correlation IDs

Correlation IDs are unique identifiers assigned to user requests, allowing logs from different services to be linked together. This practice is essential for tracing the lifecycle of a request across multiple services.

Why It Matters:

Correlation IDs enable end-to-end tracing of requests, making it easier to diagnose issues that span multiple services and improving the overall observability of the system.

Example Code:

Here’s an example of how you can generate and pass a Correlation ID through microservices using Spring Boot:

  1. Create a filter to extract or generate the correlation ID:
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.UUID;

@Component
public class CorrelationIdFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException {
        // Reuse the caller's correlation ID if present, otherwise generate one
        String correlationId = request.getHeader("X-Correlation-Id");
        if (correlationId == null) {
            correlationId = UUID.randomUUID().toString();
        }
        response.setHeader("X-Correlation-Id", correlationId);
        filterChain.doFilter(request, response);
    }
}
  2. Use the correlation ID in your logging:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Service;

import java.util.UUID;

@Service
public class OrderService {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderService.class);

    public void processOrder(String orderId) {
        // In practice the filter above would put the request's correlation ID
        // into MDC; a fresh UUID is used here only to keep the snippet self-contained
        MDC.put("correlationId", UUID.randomUUID().toString());
        try {
            LOGGER.info("Processing order: {}", orderId);
        } finally {
            MDC.clear();  // Clean up MDC after the request is processed
        }
    }
}

5. Leveraging Cloud-Native Observability Platforms

Cloud-native observability platforms offer integrated solutions for monitoring, logging, and tracing, designed to work seamlessly with cloud environments. These platforms provide scalability, flexibility, and ease of integration with various cloud services.

Why It Matters:

Cloud-native platforms are optimized for dynamic and scalable environments, providing real-time insights and reducing the operational overhead associated with managing observability tools.

Example Code:

If you're using a platform like AWS CloudWatch, you can use the AWS SDK to push custom logs:

import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.InputLogEvent;
import software.amazon.awssdk.services.cloudwatchlogs.model.PutLogEventsRequest;

public class CloudWatchLogging {

    private final CloudWatchLogsClient cloudWatchLogsClient = CloudWatchLogsClient.create();

    public void logToCloudWatch(String message) {
        InputLogEvent event = InputLogEvent.builder()
            .message(message)
            .timestamp(System.currentTimeMillis())
            .build();
        PutLogEventsRequest logRequest = PutLogEventsRequest.builder()
            .logGroupName("MyLogGroup")
            .logStreamName("MyLogStream")
            .logEvents(event)
            .build();
        cloudWatchLogsClient.putLogEvents(logRequest);
    }
}

Conclusion

Implementing advanced logging and observability practices is crucial for building resilient and maintainable distributed systems. By adopting distributed tracing, centralized log management, standardized logging formats, correlation IDs, and cloud-native observability platforms, developers can enhance system reliability and streamline the development process. These practices not only improve current system observability but also future-proof applications, ensuring they can adapt to evolving technologies and requirements.

By following these practices, developers will not only enhance system reliability and performance but also ensure that they can quickly identify, troubleshoot, and resolve issues in complex distributed systems.

Unveiling MDC (Mapped Diagnostic Context): A Comprehensive Guide to Contextual Logging in Spring Boot


In the world of software development, logging has become an indispensable tool for understanding and debugging the flow of an application. Logs provide critical insights into system behavior, but as applications become more complex, logs can quickly become overwhelming and difficult to interpret. This is where MDC (Mapped Diagnostic Context) comes into play, offering a powerful mechanism to add contextual information to your logs.

In this comprehensive guide, we will explore the evolution of MDC, its need in modern systems, how it works internally, how to use it in Spring Boot, best practices for utilizing MDC, and provide a practical example with code snippets.


What is MDC (Mapped Diagnostic Context)?

MDC (Mapped Diagnostic Context) is a feature provided by modern logging frameworks such as SLF4J, Logback, and Log4j2 that allows developers to enrich log entries with contextual data. This data is typically stored as key-value pairs and is automatically included in every log entry generated by the current thread, offering deeper insights into the system’s behavior.

The MDC can store a wide variety of context-specific data, such as:

  • User IDs
  • Transaction IDs
  • Request IDs
  • Session Information
  • Thread IDs

By associating this contextual data with log entries, MDC helps to trace the flow of events through the system and simplifies debugging and troubleshooting.


The Evolution of MDC: From Log4j to Modern Frameworks

MDC was first introduced in Log4j to address a common issue: logging systems often lack the context necessary to understand the events leading to a particular log message. Log entries were disconnected, making it challenging to trace the flow of execution or correlate events in distributed systems.

With the advent of SLF4J as a logging facade and Logback as its reference implementation, MDC was integrated into modern logging frameworks, expanding its utility across various types of applications—especially those running in distributed or multi-threaded environments.

The adoption of MDC continues to grow, particularly in microservices architectures where the need for consistent and contextual logging is paramount. By preserving context information across multiple services, MDC simplifies debugging and enhances observability.


Why is MDC Needed?

In modern applications, especially those built using microservices or multi-threaded systems, the ability to trace the execution flow of requests and correlate logs across different components is crucial. Here's why MDC is needed:

1. Contextual Logging

Logs without context can be meaningless. MDC allows you to enrich your logs with important contextual information. For instance, knowing which user or transaction the log entry is related to can significantly simplify debugging.

2. Distributed Systems and Request Tracing

In microservices-based applications, a single user request often traverses multiple services. Without a unique identifier (like a request ID) propagated across services, logs from different services can become disconnected. MDC allows the same request ID to be passed along, linking logs across services and making it easier to trace the complete lifecycle of a request.

3. Simplifying Debugging

MDC enables you to automatically include useful data in your logs, reducing the need for manual effort. For example, it can automatically append a user ID to logs for every request, making it easier to track user-related issues without needing to modify individual log statements.

4. Thread-Specific Context

MDC operates on a per-thread basis, ensuring that each thread has its own context. In multi-threaded or asynchronous applications, MDC maintains the context independently for each thread, preventing data contamination between threads.


What Operations Can You Perform with MDC?

MDC provides several important operations that make it flexible and powerful for logging in complex applications:

1. Add Context to Logs

You can use the MDC.put("key", "value") method to store diagnostic data that will be included in subsequent log messages. This data will be available across all logging statements within the same thread.

2. Access Context in Logs

Logging frameworks like Logback and SLF4J support MDC natively. You can access the MDC data in your log format using the %X{key} placeholder. This will include the value associated with the key in the log output.

3. Remove Context

Once a log entry with specific context is generated, it's good practice to remove the context using MDC.remove("key") to prevent memory leaks, especially in long-running applications. You can also remove all context with MDC.clear() if necessary.

4. Clear Context After Use

Always clear the MDC context after its use to prevent stale data from leaking into other requests or threads. For example, in web applications, MDC data should be cleared at the end of the request lifecycle.


Why is it Called "Mapped Diagnostic Context"?

The term "Mapped Diagnostic Context" refers to the fact that MDC stores contextual data as a map of key-value pairs. This map holds diagnostic information specific to a particular context (like a thread or request), allowing logs to carry this context across various layers of the application. The diagnostic context aspect refers to the role this data plays in diagnosing issues and troubleshooting problems.


How MDC Works Internally

MDC operates on a per-thread basis, meaning each thread can have its own unique diagnostic context. The underlying mechanism is based on ThreadLocal, a feature of Java that allows variables to be stored on a per-thread basis. This ensures that each thread maintains its own MDC context, independent of other threads.

When a new thread is created or a new request is handled, MDC can automatically associate a set of context data with that thread. As long as the thread is executing, it can use the MDC to enrich its logs with context-specific information. Once the thread finishes its work, the MDC data is cleared to prevent memory leaks.
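Because MDC rides on ThreadLocal, context set on a request thread does not automatically follow work handed to a thread pool. The plain-ThreadLocal sketch below (no logging framework required, all names illustrative) shows the problem and the copy pattern that mirrors SLF4J's MDC.getCopyOfContextMap() / MDC.setContextMap():

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextPropagationDemo {

    // A plain ThreadLocal map, standing in for MDC's internal storage
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    // Without copying, a pool thread sees its own (empty) context
    static String lookupOnPoolThread(ExecutorService pool) throws Exception {
        return pool.submit(
                () -> CONTEXT.get().getOrDefault("transactionId", "<missing>")).get();
    }

    // Snapshot the caller's context before handing work off, mirroring
    // MDC.getCopyOfContextMap() / MDC.setContextMap()
    static String lookupWithCopiedContext(ExecutorService pool) throws Exception {
        Map<String, String> snapshot = new HashMap<>(CONTEXT.get());
        return pool.submit(() -> {
            CONTEXT.set(snapshot);
            try {
                return CONTEXT.get().getOrDefault("transactionId", "<missing>");
            } finally {
                CONTEXT.remove(); // don't leak context into the pooled thread
            }
        }).get();
    }

    public static void main(String[] args) throws Exception {
        put("transactionId", "12345");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        System.out.println(lookupOnPoolThread(pool));      // <missing>
        System.out.println(lookupWithCopiedContext(pool)); // 12345
        pool.shutdown();
    }
}
```

The same snapshot-and-restore move is what wrapper utilities for MDC-aware executors perform around every submitted task.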

Flow of MDC in a Request Path

Imagine a scenario where an e-commerce application has a Payment Service, Order Service, and Inventory Service, and a user request is processed sequentially across these services. The transaction ID is added to MDC in the Order Service and is passed along with the request to the other services. This creates a continuous trace of logs that are related to the same transaction, even if the services are running on separate machines.

  1. Step 1: The user makes a request to the Order Service to place an order.
  2. Step 2: The Order Service generates a transaction ID and adds it to the MDC (MDC.put("transactionId", "12345")).
  3. Step 3: The Order Service calls the Payment Service.
  4. Step 4: The Payment Service accesses the transaction ID from MDC and logs relevant information related to the payment (%X{transactionId}).
  5. Step 5: After the payment is successful, the Order Service calls the Inventory Service.
  6. Step 6: The Inventory Service also logs its actions using the same transaction ID.

At the end of the process, all logs related to this specific transaction across different services are enriched with the same transaction ID, making it easy to trace the path of the request.


Code Example: Using MDC in Spring Boot

Step 1: Add Dependencies

If you’re using Logback (default in Spring Boot), you don’t need to add any additional dependencies. If you prefer Log4j2, exclude spring-boot-starter-logging and include the following dependency in your pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>

Step 2: Create a Filter for Adding MDC to Requests

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.UUID;

@Component
public class MdcFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException {
        // Generate a unique transaction ID for the request
        String transactionId = UUID.randomUUID().toString();
        
        // Add the transaction ID to MDC
        MDC.put("transactionId", transactionId);
        
        try {
            // Proceed with the request
            filterChain.doFilter(request, response);
        } finally {
            // Clean up MDC to avoid memory leaks
            MDC.remove("transactionId");
        }
    }
}

Step 3: Configure Logback to Log MDC Data

In your logback-spring.xml file, configure the log format to include the transaction ID:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} - %X{transactionId} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

Step 4: Use MDC in Your Service

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void createOrder(String userId) {
        // Set MDC value for this request
        MDC.put("userId", userId);
        try {
            // Log order creation; the pattern can reference it via %X{userId}
            logger.info("Order created for user");
            // Simulate order processing logic
        } finally {
            MDC.remove("userId");  // Clean up so pooled threads don't leak context
        }
    }
}
}

Best Practices for Using MDC in Modern Applications

While MDC is a powerful tool, it’s important to follow some best practices to ensure optimal performance and avoid pitfalls:

1. Use MDC for Contextual Data Only

MDC should be used to store contextual data that is relevant to the current thread or request. Avoid using it for storing application-wide settings or data that is not tied to a specific context.

2. Propagate Context Across Services

In a microservices environment, propagate MDC data (such as a transaction ID) across services to correlate logs. You can use HTTP headers or messaging queues to pass MDC data between services.
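Over HTTP, propagation means copying the MDC value into an outgoing request header, which the next service's filter reads back into its own MDC. A JDK-only sketch of the outgoing half (the header name and the ThreadLocal stand-in for MDC are assumptions for illustration):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class CorrelationHeaderDemo {

    // Stand-in for MDC.get("transactionId"), keeping the sketch JDK-only
    private static final ThreadLocal<String> TRANSACTION_ID = new ThreadLocal<>();

    static void setTransactionId(String id) {
        TRANSACTION_ID.set(id);
    }

    // Attach the current thread's transaction ID as an outgoing header --
    // the same move a RestTemplate/WebClient interceptor would make
    static HttpRequest outgoingRequest(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("X-Transaction-Id", TRANSACTION_ID.get())
                .GET()
                .build();
    }

    public static void main(String[] args) {
        setTransactionId("12345");
        HttpRequest request = outgoingRequest("http://inventory-service/stock");
        System.out.println(request.headers().firstValue("X-Transaction-Id").orElse("<none>")); // 12345
    }
}
```

On the receiving side, a filter like the MdcFilter above would read X-Transaction-Id from the incoming request and MDC.put it before the request proceeds.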

3. Clean Up MDC Context

Always clean up MDC after the context is no longer needed. Use MDC.remove() or MDC.clear() to prevent memory leaks. In Spring Boot applications, use a filter or interceptor to clean up the MDC context at the end of a request.

4. Avoid Overloading MDC

MDC is not meant to hold large or sensitive data. Use it for small, lightweight, and non-sensitive contextual information, such as request IDs or user IDs.


Conclusion

MDC is a crucial tool for improving the quality of logs and simplifying debugging, especially in multi-threaded and distributed systems. By associating context-specific data with log entries, MDC enhances traceability, observability, and debugging efficiency.

By following the steps and practices outlined above, you can harness the full power of MDC in your Spring Boot applications, ensuring that logs are not just a collection of messages but a comprehensive, contextual record of application activity.

Mastering Spring Boot: Advanced Interview Questions to Showcase Your Expertise

Spring Boot is the cornerstone of modern Java development, empowering developers to create scalable, production-ready applications with ease. If you’re preparing for an advanced Spring Boot interview, expect deep-dives into internals, scenario-based challenges, and intricate real-world problems. Here’s a guide to questions designed to showcase your expertise and help you stand out.


Why Spring Boot Interviews Are Challenging

Spring Boot simplifies application development, but understanding its internal mechanics and applying that knowledge in complex scenarios separates experienced developers from beginners. Advanced interviews often probe:

  • In-depth understanding of Spring Boot internals.
  • Ability to handle complex real-world scenarios.
  • Problem-solving skills under constraints.
  • Awareness of design trade-offs and best practices.

Expert-Level Scenario-Based Questions and Answers

1. Customizing Auto-Configuration for Legacy Systems

Scenario: Your company uses a legacy logging library incompatible with Spring Boot’s default logging setup. How would you replace the default logging configuration?

Answer:

  1. Exclude Default Logging: Exclude the spring-boot-starter-logging dependency in your build file (Spring Boot's logging setup is driven by LoggingSystem, not an auto-configuration class, so @SpringBootApplication(exclude = ...) cannot remove it).
  2. Create Custom Configuration: Define a @Configuration class and register your logging beans:
    @Configuration
    @ConditionalOnClass(CustomLogger.class)
    public class CustomLoggingConfig {
        @Bean
        public Logger customLogger() {
            return new CustomLogger();
        }
    }
    
  3. Register in spring.factories: Add the class to META-INF/spring.factories under EnableAutoConfiguration (on Spring Boot 2.7+, list it in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports instead).
  4. Test Integration: Validate integration and ensure logs meet expectations.

2. Multi-Tenant Architecture

Scenario: You’re building a multi-tenant SaaS application. Each tenant requires a separate database. How would you implement this in Spring Boot?

Answer:

  1. Database Routing:
    • Implement AbstractRoutingDataSource to switch the DataSource dynamically based on tenant context.
    public class TenantRoutingDataSource extends AbstractRoutingDataSource {
        @Override
        protected Object determineCurrentLookupKey() {
            return TenantContext.getCurrentTenant();
        }
    }
    
  2. Tenant Context:
    • Use ThreadLocal or a filter to set tenant-specific context.
  3. Configuration:
    • Define multiple DataSource beans and configure Hibernate to work with the routed DataSource.
    @Configuration
    public class DataSourceConfig {
        @Bean
        public DataSource tenantDataSource() {
            TenantRoutingDataSource dataSource = new TenantRoutingDataSource();
            Map<Object, Object> tenantDataSources = new HashMap<>();
            tenantDataSources.put("tenant1", dataSourceForTenant1());
            tenantDataSources.put("tenant2", dataSourceForTenant2());
            dataSource.setTargetDataSources(tenantDataSources);
            return dataSource;
        }
    
        private DataSource dataSourceForTenant1() {
            return DataSourceBuilder.create().url("jdbc:mysql://tenant1-db").build();
        }
    
        private DataSource dataSourceForTenant2() {
            return DataSourceBuilder.create().url("jdbc:mysql://tenant2-db").build();
        }
    }
    
  4. Challenges: Address schema versioning and cross-tenant operations.
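The TenantContext referenced by determineCurrentLookupKey() above is typically a small ThreadLocal holder. A minimal sketch (the accessor names match the routing snippet; the rest is an assumption):

```java
public final class TenantContext {

    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    private TenantContext() { }

    // Called by a servlet filter once the tenant is resolved
    // (e.g. from a subdomain, JWT claim, or request header)
    public static void setCurrentTenant(String tenantId) {
        CURRENT_TENANT.set(tenantId);
    }

    // Called by TenantRoutingDataSource.determineCurrentLookupKey()
    public static String getCurrentTenant() {
        return CURRENT_TENANT.get();
    }

    // Must run in a finally block at the end of the request; otherwise a
    // pooled thread can leak one tenant's context into the next request
    public static void clear() {
        CURRENT_TENANT.remove();
    }
}
```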

3. Circular Dependency Resolution

Scenario: Two services in your application depend on each other for initialization, causing a circular dependency. How would you resolve this without refactoring the services?

Answer:

  1. Use @Lazy Initialization: Annotate one or both beans with @Lazy to delay their creation.
  2. Use ObjectProvider: Inject dependencies dynamically:
    @Service
    public class ServiceA {
        private final ObjectProvider<ServiceB> serviceBProvider;
    
        public ServiceA(ObjectProvider<ServiceB> serviceBProvider) {
            this.serviceBProvider = serviceBProvider;
        }
    
        public void execute() {
            serviceBProvider.getIfAvailable().performTask();
        }
    }
    
  3. Event-Driven Design:
    • Use ApplicationEvent to decouple service initialization.

4. Zero-Downtime Deployments

Scenario: Your Spring Boot application is deployed in Kubernetes. How do you ensure zero downtime during rolling updates?

Answer:

  1. Readiness and Liveness Probes: Configure Kubernetes probes:
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
    livenessProbe:
      httpGet:
        path: /actuator/health
        port: 8080
    
  2. Graceful Shutdown: Release resources on shutdown with @PreDestroy, for example draining a worker pool before the JVM exits:
    @RestController
    public class GracefulShutdownController {
        private final ExecutorService executorService = Executors.newFixedThreadPool(10);
    
        @PreDestroy
        public void onShutdown() {
            executorService.shutdown();
            try {
                if (!executorService.awaitTermination(30, TimeUnit.SECONDS)) {
                    executorService.shutdownNow();
                }
            } catch (InterruptedException e) {
                executorService.shutdownNow();
            }
        }
    }
    
  3. Session Stickiness: Configure the load balancer to keep users on the same instance during updates.
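Since Spring Boot 2.3, graceful handling of in-flight requests is also available purely through configuration, which pairs naturally with the probes above (the timeout value is illustrative):

```yaml
server:
  shutdown: graceful          # stop accepting new requests, finish in-flight ones

spring:
  lifecycle:
    timeout-per-shutdown-phase: 30s   # grace period before forceful shutdown
```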

5. Debugging Memory Leaks

Scenario: Your Spring Boot application experiences memory leaks under high load in production. How do you identify and fix the issue?

Answer:

  1. Heap Dump Analysis:
    • Enable heap dumps with -XX:+HeapDumpOnOutOfMemoryError.
    • Use tools like Eclipse MAT to analyze memory usage.
  2. Profiling:
    • Use profilers (YourKit, JProfiler) to identify memory hotspots.
  3. Fix Leaks:
    • Address common culprits like improper use of ThreadLocal or caching mechanisms.
    @Service
    public class CacheService {
        private final Map<String, Object> cache = new ConcurrentHashMap<>();
    
        public void clearCache() {
            cache.clear();
        }
    }
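A common shape for the ThreadLocal culprit mentioned in step 3: a per-thread value installed on a pooled thread and never removed, which keeps its referent reachable for as long as the pool lives. A minimal sketch of the fix (names illustrative):

```java
public class ThreadLocalCleanupDemo {

    // Each pooled thread that touches this keeps a reference to its buffer
    // until remove() is called -- on a long-lived pool, that is a leak
    private static final ThreadLocal<byte[]> BUFFER =
            ThreadLocal.withInitial(() -> new byte[1024 * 1024]);

    static int useBuffer() {
        try {
            byte[] buf = BUFFER.get();   // allocate / reuse the per-thread buffer
            return buf.length;           // ... do work with it ...
        } finally {
            BUFFER.remove();             // always release in finally
        }
    }

    public static void main(String[] args) {
        System.out.println(useBuffer()); // 1048576
    }
}
```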
    

6. Advanced Security: Custom Token Introspection

Scenario: You need to secure an application using OAuth 2.0 but require custom token introspection. How would you implement this?

Answer:

  1. Override Default Introspector: Implement OpaqueTokenIntrospector:
    @Component
    public class CustomTokenIntrospector implements OpaqueTokenIntrospector {
        @Override
        public OAuth2AuthenticatedPrincipal introspect(String token) {
            // Custom logic to validate and parse the token, producing the
            // attributes map and granted-authority list used below
            return new DefaultOAuth2AuthenticatedPrincipal(attributes, authorities);
        }
    }
    
  2. Register in Security Configuration:
    @Configuration
    public class SecurityConfig {
        @Bean
        public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
            http.oauth2ResourceServer(oauth2 -> oauth2
                .opaqueToken(opaque -> opaque.introspector(new CustomTokenIntrospector())));
            return http.build();
        }
    }
    

Why Mastering Spring Boot Matters

  1. Increased Productivity: Spring Boot’s auto-configuration and embedded server reduce boilerplate code, letting you focus on business logic.

  2. Scalability: Features like actuator metrics, health checks, and integration with Kubernetes make it ideal for large-scale applications.

  3. Community and Ecosystem: A vast library of integrations and strong community support make Spring Boot a robust choice for enterprise development.

  4. Future-Proof: Regular updates, compatibility with cloud-native architectures, and strong adoption in microservices ensure longevity.
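As a concrete example of point 2, the actuator endpoints are enabled by adding the spring-boot-starter-actuator dependency plus a little configuration (the endpoint list shown is illustrative):

```properties
# Expose the health and metrics endpoints over HTTP
management.endpoints.web.exposure.include=health,metrics
# Show component-level detail in /actuator/health
management.endpoint.health.show-details=always
```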


Where to Learn More

  1. Official Documentation:
    • The Spring Boot reference documentation and guides on spring.io.
  2. Books:
    • Spring Microservices in Action by John Carnell.
    • Cloud Native Java by Josh Long.
  3. Online Courses:
    • Udemy, Pluralsight, and Baeldung’s advanced Spring Boot courses.
  4. Track Updates:
    • Follow the Spring blog and Spring Boot release notes on GitHub.
Mastering these advanced questions and scenarios ensures you’re prepared to tackle even the most challenging Spring Boot interview. It’s not just about answering questions but demonstrating an in-depth understanding of concepts and practical problem-solving skills.

Good luck on your journey to becoming a Spring Boot expert!

Jakarta EE: Unlocking Enterprise Power with Java SE as Its Foundation

 

Jakarta EE, formerly known as Java EE, is the gold standard for enterprise-grade application development. Built on the robust foundation of Java SE, Jakarta EE extends its capabilities to address the demands of enterprise systems—scalability, security, and support for modern web technologies.

In this blog, we’ll explore why Jakarta EE came into existence, its features, how it links with Spring Boot, a practical standalone example, and how to stay updated with its ecosystem.


Why Was Jakarta EE Introduced?

The transition from Java EE to Jakarta EE wasn’t just a rebranding effort; it was a strategic move to ensure the evolution of enterprise Java. Here’s why Jakarta EE came into existence:

  1. The End of Oracle's Stewardship:
    Oracle decided to transfer Java EE to the Eclipse Foundation, enabling open collaboration and innovation without corporate restrictions.

  2. Legal Restrictions on javax.*:
    Oracle retained rights to the javax.* namespace, necessitating the shift to the jakarta.* namespace.

  3. Cloud-Native and Modernization Needs:
    The rise of microservices, cloud-native architectures, and lightweight runtimes drove the need for a more modern enterprise framework.

  4. Faster Release Cycles:
    Jakarta EE adopted a more agile and community-driven development process, enabling quicker updates compared to Java EE.

  5. Community Ownership:
    Moving to the Eclipse Foundation empowered a vibrant community of developers and organizations like Red Hat, IBM, and Payara to contribute to its growth.


Key Features Introduced by Jakarta EE

Jakarta EE is packed with features tailored for modern enterprise application development:

  • Cloud-Native Ready: Optimized for containerized and serverless deployments.
  • Microservices-Friendly: Seamless integration with Eclipse MicroProfile.
  • Enhanced APIs: Advanced versions of existing APIs, such as Jakarta RESTful Web Services and Jakarta Persistence.
  • Backward Compatibility: Easy migration for Java EE projects.
  • Lightweight Runtimes: Reduced overhead for modern architectures.

Why Should You Use Jakarta EE?

1. Enterprise-Ready Features

Full-stack enterprise-grade specifications like JPA, JMS, CDI, and JTA ensure scalability, security, and robustness.

2. Vendor Neutrality

Jakarta EE runs on any compatible implementation (e.g., Payara, WildFly, Open Liberty), avoiding vendor lock-in.

3. Cloud and Microservices Support

Features like REST APIs, lightweight deployments, and support for Kubernetes and Docker make it ideal for cloud-native applications.

4. Open-Source and Community-Driven

Jakarta EE thrives on contributions from a global community, ensuring constant innovation.

5. Integration with Modern Tools

Jakarta EE integrates with Spring Boot, enabling developers to combine its enterprise power with Spring's lightweight development.

6. Future-Proof

Continuous updates and alignment with modern trends make Jakarta EE a long-term choice for enterprise applications.


Standalone Example: Building a REST API with Jakarta EE

Let’s demonstrate Jakarta EE’s simplicity by building a basic REST API.

Dependencies:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>10.0.0</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>2.2.220</version>
</dependency>

Entity Class:

import jakarta.persistence.Entity;
import jakarta.persistence.Id;

@Entity
public class Product {

    @Id
    private Long id;
    private String name;
    private Double price;

    // Getters and setters...
}

REST Resource:

import jakarta.persistence.EntityManager;
import jakarta.persistence.Persistence;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;
import java.util.List;

@Path("/products")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ProductResource {

    private final EntityManager em =
            Persistence.createEntityManagerFactory("default").createEntityManager();

    @GET
    public List<Product> getProducts() {
        return em.createQuery("SELECT p FROM Product p", Product.class).getResultList();
    }

    @POST
    public String addProduct(Product product) {
        em.getTransaction().begin();
        em.persist(product);
        em.getTransaction().commit();
        return "Product added!";
    }
}

Where to Learn More About Jakarta EE

  1. Official Website
    Visit the official Jakarta EE website: https://jakarta.ee/
    It offers documentation, guides, and an overview of features.

  2. Eclipse Foundation Blog
    Follow the Eclipse Foundation’s blog for insights and announcements: Eclipse Blog.

  3. Jakarta EE on GitHub
    Explore Jakarta EE’s open-source repositories: GitHub Repositories.

  4. Books and Courses

    • Jakarta EE Cookbook by Elder Moraes
    • Practical Enterprise Application Development with Jakarta EE by Otavio Santana
    • Online courses on Udemy, Pluralsight, and Coursera.
  5. YouTube Channels
    Look for channels like Jakarta EE Tutorials and Eclipse Foundation for video tutorials.


How to Track Changes and Upcoming Releases

  1. Release Notes:
    Each release has detailed notes on updates, fixes, and new features. Access them on Jakarta EE Downloads.

  2. Jakarta EE Specifications:
    Stay updated with the latest specifications and RFCs: Jakarta Specifications.

  3. Join the Community:
    Participate in the Jakarta EE mailing lists and community forums hosted by the Eclipse Foundation.

  4. Follow Jakarta EE Working Group:
    Get updates directly from the Jakarta EE working group at Eclipse Jakarta EE Working Group.

  5. Social Media:
    Follow Jakarta EE on Twitter and LinkedIn for the latest updates.


Conclusion

Jakarta EE ensures the continuity and modernization of enterprise Java by embracing open standards, cloud-native architectures, and faster innovation cycles. Its seamless integration with Java SE and frameworks like Spring Boot, combined with robust features, makes it an ideal choice for building enterprise-grade applications.

By following its evolving ecosystem and mastering its core topics, you’ll stay ahead of the curve in enterprise development. Begin your journey with Jakarta EE today to unlock the full potential of modern enterprise Java!

Understanding CommonClientSessionModel in Keycloak

 The CommonClientSessionModel interface in Keycloak serves as a foundational component for managing client sessions. It provides methods and enums to handle session-specific attributes, actions, and execution states. Understanding and leveraging this interface is essential for building robust, secure, and dynamic authentication flows.


Key Features of CommonClientSessionModel

  1. Session Management: Provides methods to manage session-specific data like redirect URIs, actions, and protocols.
  2. Enums for Actions and Status: Defines enums such as Action and ExecutionStatus to standardize session behaviors and outcomes.
  3. Integration with Realms and Clients: Tightly coupled with RealmModel and ClientModel, enabling seamless integration into the Keycloak ecosystem.

Methods in CommonClientSessionModel

1. getRedirectUri and setRedirectUri

  • Purpose: Manage the URI to which the user is redirected after authentication.
  • Example:
sessionModel.setRedirectUri("https://example.com/callback");
String redirectUri = sessionModel.getRedirectUri();

2. getAction and setAction

  • Purpose: Define and retrieve the current action for the session.
  • Example:
sessionModel.setAction(CommonClientSessionModel.Action.AUTHENTICATE.name());
String action = sessionModel.getAction();

3. getProtocol and setProtocol

  • Purpose: Specify the protocol used in the session (e.g., OpenID Connect, SAML).
  • Example:
sessionModel.setProtocol("openid-connect");
String protocol = sessionModel.getProtocol();

4. getRealm and getClient

  • Purpose: Retrieve the associated realm and client models.
  • Example:
RealmModel realm = sessionModel.getRealm();
ClientModel client = sessionModel.getClient();

Enums in CommonClientSessionModel

1. Action

Defines actions that can be taken during a session. Examples include:

  • OAUTH_GRANT: Handles OAuth grants.
  • AUTHENTICATE: Initiates user authentication.
  • LOGGED_OUT: Represents a logged-out state.
  • USER_CODE_VERIFICATION: Used for verifying user codes in device flows.

Example: When to Use USER_CODE_VERIFICATION

  • Scenario: Implementing a device authorization grant flow where users enter a code to verify their identity.
  • Implementation:
sessionModel.setAction(CommonClientSessionModel.Action.USER_CODE_VERIFICATION.name());
// Handle user code verification logic here

2. ExecutionStatus

Defines the status of an authentication execution. Examples include:

  • FAILED: Execution failed.
  • SUCCESS: Execution succeeded.
  • CHALLENGED: A challenge was presented to the user.
  • EVALUATED_TRUE / EVALUATED_FALSE: Used for conditional checks.

Changing ExecutionStatus

To set or change the execution status, update it on the authentication session, which tracks the status of each execution by its ID.

Example: Changing ExecutionStatus

authSession.setExecutionStatus("execution-id",
        CommonClientSessionModel.ExecutionStatus.SUCCESS);

When to Use CommonClientSessionModel

1. Action-Specific Flows

Use setAction to dynamically assign actions based on the authentication flow. For example:

  • AUTHENTICATE: Used for standard login flows.
  • USER_CODE_VERIFICATION: Ideal for device login scenarios.

2. Tracking and Debugging Execution States

Leverage ExecutionStatus to monitor and debug authentication flows. For example:

  • Track failed executions and log error details.
  • Use CHALLENGED to understand where users faced challenges in the flow.

3. Protocol-Specific Implementations

Use getProtocol and setProtocol to differentiate between OIDC and SAML flows. This is especially useful in multi-protocol environments.
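As a sketch of that idea, protocol-based dispatch might look like the following. The ClientSession interface here is a local stand-in (so the snippet is self-contained), not the real Keycloak CommonClientSessionModel; in a provider you would call sessionModel.getProtocol() directly.

```java
// Local stand-in for CommonClientSessionModel's protocol accessor —
// illustrative only, not the real Keycloak interface.
interface ClientSession {
    String getProtocol();
}

public class ProtocolDispatch {
    // Route session handling based on the protocol stored on the session.
    static String handle(ClientSession session) {
        switch (session.getProtocol()) {
            case "openid-connect": return "handled-oidc";
            case "saml":           return "handled-saml";
            default:               return "unsupported-protocol";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> "openid-connect")); // handled-oidc
        System.out.println(handle(() -> "saml"));           // handled-saml
    }
}
```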


Best Practices

  1. State Management:

    • Ensure actions and execution statuses are updated correctly to prevent inconsistencies.
    • Use enums like ExecutionStatus to standardize status updates.
  2. Security:

    • Avoid storing sensitive data directly in session attributes.
    • Regularly validate session attributes to prevent unauthorized modifications.
  3. Error Handling:

    • Log and track execution failures using ExecutionStatus.
    • Provide meaningful error messages to guide users through challenges.
  4. Documentation:

    • Maintain detailed documentation for custom actions and execution flows.
  5. Testing:

    • Test all possible execution statuses to ensure robust handling of edge cases.

By leveraging CommonClientSessionModel, you can build highly customizable and secure authentication flows tailored to your application's needs.

Understanding Keycloak's AuthenticationProcessor: Dynamic Authentication Flows

In Keycloak, the AuthenticationProcessor class is a critical component that allows you to dynamically handle custom authentication flows. It provides the ability to define, execute, and manipulate authentication logic programmatically. This flexibility enables advanced use cases such as dynamic user authentication, condition-based flow adjustments, and seamless integration with external systems.

This guide dives into the AuthenticationProcessor, its use cases, benefits, and a detailed explanation of its API with examples.


What is AuthenticationProcessor?

The AuthenticationProcessor is a core Keycloak utility that manages authentication flows by:

  1. Binding Contextual Information: It combines session, user, realm, and connection details into a cohesive authentication context.
  2. Executing Flows Dynamically: You can use it to execute specific authentication steps or entire flows.
  3. Handling State: It integrates seamlessly with the AuthenticationSessionModel to persist state across authentication steps.

Anatomy of the AuthenticationProcessor API

Here is a basic example of configuring an AuthenticationProcessor instance:

AuthenticationProcessor processor = new AuthenticationProcessor();
processor.setAuthenticationSession(authSession)
         .setFlowId(flowId)
         .setConnection(clientConnection)
         .setEventBuilder(event)
         .setRealm(realm)
         .setSession(session)
         .setUriInfo(session.getContext().getUri())
         .setRequest(session.getContext().getHttpRequest());

// Set the authenticated user in the session
authSession.setAuthenticatedUser(user);

// Trigger the authentication flow
Response challenge = processor.authenticateOnly();

Key Methods in AuthenticationProcessor

  1. setAuthenticationSession

    • Associates the authentication session with the processor.
    • Purpose: Persist state across the flow (e.g., OTPs, user attributes).
  2. setFlowId

    • Defines the ID of the authentication flow to execute.
    • Purpose: Enables switching between different flows dynamically.
  3. setConnection

    • Sets the client connection for the session.
    • Purpose: Provides information about the incoming request (IP, headers).
  4. setEventBuilder

    • Links an event builder for logging and tracking events.
    • Purpose: Captures authentication-related events for audit logs.
  5. authenticateOnly

    • Executes the authentication flow without triggering the complete login process.
    • Purpose: Useful for validating partial flows or custom authentication logic.

When to Use AuthenticationProcessor?

1. Custom Authentication Scenarios

Example: Multi-Factor Authentication (MFA) for High-Risk Users

  • Scenario: Require additional steps (e.g., OTP validation) for users flagged as high-risk.
  • Implementation:
AuthenticationProcessor processor = new AuthenticationProcessor();
processor.setAuthenticationSession(authSession)
         .setFlowId("mfa-flow-id")
         .setRealm(realm)
         .setSession(session);

if (isHighRiskUser(user)) {
    Response challenge = processor.authenticateOnly();
    if (challenge.getStatus() == Response.Status.UNAUTHORIZED.getStatusCode()) {
        // Handle failed MFA step
    }
}

Benefit: Ensures that only authorized and verified users can access sensitive resources.


2. Dynamic Flow Adjustments

Example: Conditional OTP Validation Based on Device Trust

  • Scenario: Skip OTP validation for trusted devices.
  • Implementation:
boolean isTrustedDevice = checkDeviceTrust(user, session);

if (!isTrustedDevice) {
    AuthenticationProcessor processor = new AuthenticationProcessor();
    processor.setAuthenticationSession(authSession)
             .setFlowId("otp-validation-flow")
             .setRealm(realm)
             .setSession(session);

    Response challenge = processor.authenticateOnly();
    if (challenge.getStatus() != Response.Status.OK.getStatusCode()) {
        // OTP validation failed
    }
}

Benefit: Provides a seamless user experience by reducing unnecessary authentication steps for trusted devices.


3. Seamless Integration with External Systems

Example: Validate External Token Before Proceeding

  • Scenario: Authenticate users based on an external token provided by an SSO provider.
  • Implementation:
boolean isValidToken = validateExternalToken(externalToken);

if (isValidToken) {
    authSession.setAuthenticatedUser(user);
} else {
    AuthenticationProcessor processor = new AuthenticationProcessor();
    processor.setAuthenticationSession(authSession)
             .setFlowId("external-token-fallback-flow")
             .setRealm(realm)
             .setSession(session);

    Response fallbackChallenge = processor.authenticateOnly();
}

Benefit: Ensures consistent and secure integration with third-party systems while maintaining user experience.


Benefits of Using AuthenticationProcessor

  1. Dynamic Flow Management:

    • Adapt flows based on runtime conditions (e.g., device trust, user role).
  2. Enhanced Security:

    • Add conditional checks to mitigate risks (e.g., geolocation-based MFA).
  3. Reusability:

    • Encapsulate authentication logic in a reusable way for custom extensions.
  4. Granular Control:

    • Execute specific parts of an authentication flow without completing the entire login process.
  5. Scalability:

    • Enables consistent user authentication across multiple realms or tenant configurations.

Best Practices

  1. State Management:

    • Use AuthenticationSessionModel to store temporary data like OTPs or risk flags.
  2. Error Handling:

    • Check the response status of authenticateOnly and handle failures gracefully.
  3. Event Logging:

    • Integrate with Keycloak’s event builder to capture detailed logs for security audits.
  4. Performance Optimization:

    • Avoid unnecessary flow executions by using conditional logic.
  5. Security Hardening:

    • Regularly review and test custom flows to identify vulnerabilities.
  6. Documentation:

    • Maintain clear and concise documentation for custom authentication flows to simplify future updates.

Conclusion

The AuthenticationProcessor is a versatile tool that unlocks the ability to define and execute custom authentication flows in Keycloak. By leveraging its API, you can create secure, user-friendly, and dynamic authentication processes tailored to your specific requirements. Whether handling MFA, integrating with external systems, or implementing risk-based authentication, the AuthenticationProcessor provides a robust foundation for advanced authentication logic.

By following best practices and aligning with business goals, you can maximize the potential of Keycloak’s authentication features while ensuring security, scalability, and ease of use.

Using Action Sets in Keycloak Authentication: Configuring Multi-Step Flows


Keycloak offers powerful mechanisms to implement custom authentication flows using action sets and authentication sessions. With this flexibility, you can create both simple and advanced authentication processes, such as passwordless login, multi-factor authentication (MFA), and OTP-based verification. This guide provides a detailed walkthrough on configuring these action sets, alongside best practices and advanced strategies to maximize security and usability.


What Are Action Sets in Keycloak?

Action sets in Keycloak define the individual steps that comprise an authentication flow. By combining and sequencing these steps, you can create tailored workflows that meet specific requirements.

Key Concepts

  1. Action: A single step in an authentication process, such as verifying credentials or sending an OTP.
  2. Action Set: A group of actions that belong to an authentication step.
  3. Execution Flow: Defines the sequence in which actions are executed.

Configuring Multi-Step Authentication Flows

Example 1: Two-Step Authentication Flow (Password + OTP)

Step 1: Validate Password

  • Action: PasswordForm
    • Accepts the username and password from the user for validation.
    • Execution Requirement: Required

Step 2: Validate OTP

  • Action: OTPForm
    • Sends an OTP to the user’s registered email or phone and validates it.
    • Execution Requirement: Required

Configuration Example

{
  "alias": "Password + OTP",
  "authenticationExecutions": [
    {
      "authenticator": "password-form",
      "requirement": "REQUIRED",
      "priority": 10
    },
    {
      "authenticator": "otp-form",
      "requirement": "REQUIRED",
      "priority": 20
    }
  ]
}

Example 2: Passwordless Login with Device Trust

Step 1: Username/Email Input

  • Action: UserNameForm
    • Allows the user to input their username or email.
    • Execution Requirement: Required

Step 2: OTP Validation

  • Action: OTPForm
    • Sends and validates an OTP for passwordless login.
    • Execution Requirement: Required

Step 3: Device Trust Validation

  • Action: DeviceTrustCheck
    • Verifies if the login is from a trusted device or browser.
    • Execution Requirement: Conditional (skipped for trusted devices)

Configuration Example

{
  "alias": "Passwordless with Device Trust",
  "authenticationExecutions": [
    {
      "authenticator": "username-form",
      "requirement": "REQUIRED",
      "priority": 10
    },
    {
      "authenticator": "otp-form",
      "requirement": "REQUIRED",
      "priority": 20
    },
    {
      "authenticator": "device-trust-check",
      "requirement": "CONDITIONAL",
      "priority": 30
    }
  ]
}

Configuring Action Sets in the Keycloak Admin Console

Step 1: Create a New Authentication Flow

  1. Navigate to Authentication in the Keycloak Admin Console.
  2. Click on the Flows tab.
  3. Click New to create a custom authentication flow.
    • Alias: Enter a name for your flow (e.g., Password + OTP).
    • Built-In: Choose whether this flow is built-in or not (leave unchecked for custom flows).

Step 2: Add Execution Steps

  1. Select your newly created flow and click Add Execution.
  2. In the Add Execution dialog, choose an authenticator (e.g., password-form, otp-form).
  3. Set the Requirement:
    • Required: Mandatory step.
    • Conditional: Executed only if conditions are met.
    • Alternative: One of several possible steps.
  4. Save your changes.

Step 3: Configure Execution Order

  1. Use the Priority field to define the order of execution.
  2. Lower numbers are executed first.
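The ordering rule can be sketched in plain Java: executions are simply sorted by ascending priority before they run. The Execution record below is a local stand-in, not a Keycloak class; the authenticator names mirror the flow configured earlier.

```java
import java.util.Comparator;
import java.util.List;

// Local stand-in for one execution entry in a flow; not a Keycloak class.
record Execution(String authenticator, int priority) {}

public class PriorityOrder {
    // Lower priority values run first, matching the admin console's rule.
    static List<String> order(List<Execution> flow) {
        return flow.stream()
                .sorted(Comparator.comparingInt(Execution::priority))
                .map(Execution::authenticator)
                .toList();
    }

    public static void main(String[] args) {
        List<Execution> flow = List.of(
                new Execution("otp-form", 20),
                new Execution("password-form", 10),
                new Execution("device-trust-check", 30));
        System.out.println(order(flow)); // [password-form, otp-form, device-trust-check]
    }
}
```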

Step 4: Save and Test

  1. Assign the custom flow to the desired client or realm under Bindings.
  2. Test the flow by logging in with a user account.

Dynamically Updating Action Sets in Custom Sign-In Flows

Keycloak allows for dynamic updates to action sets, enabling custom sign-in flows that adapt based on real-time conditions. For instance, you can programmatically determine which actions or steps to include in a flow based on user roles, device type, or risk assessment.

Example: Dynamic Action Injection

Use Case: Adding an OTP Step for High-Risk Logins

Suppose you want to dynamically add an OTP validation step for users logging in from unknown devices or high-risk locations.

Implementation Steps:

  1. Create a Custom Authenticator: Implement a custom authenticator to evaluate conditions and modify the flow dynamically.
public class DynamicActionAuthenticator implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        boolean isHighRisk = evaluateRisk(context);

        if (isHighRisk) {
            context.getAuthenticationSession().setAuthNote("add-otp-step", "true");
        }
        context.success();
    }

    private boolean evaluateRisk(AuthenticationFlowContext context) {
        // Implement logic to evaluate risk (e.g., check IP, device trust, or user behavior).
        return true; // Example: Treat all logins as high-risk for demonstration.
    }
}
  2. Modify the Flow Based on Conditions: In subsequent steps, check the AuthNote to determine whether to include additional actions.
public class ConditionalOTPAuthenticator implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        String addOtpStep = context.getAuthenticationSession().getAuthNote("add-otp-step");

        if ("true".equals(addOtpStep)) {
            String otp = generateOtp();
            context.getAuthenticationSession().setAuthNote("otp", otp);
            sendOtpToUser(context.getUser(), otp);
            context.challenge(context.form().createForm("otp-form.ftl"));
        } else {
            context.success();
        }
    }

    @Override
    public void action(AuthenticationFlowContext context) {
        String inputOtp = context.getHttpRequest().getDecodedFormParameters().getFirst("otp");
        String sessionOtp = context.getAuthenticationSession().getAuthNote("otp");

        if (inputOtp != null && inputOtp.equals(sessionOtp)) {
            context.success();
        } else {
            context.failure(AuthenticationFlowError.INVALID_CREDENTIALS);
        }
    }
}
  3. Configure the Flow: Add the DynamicActionAuthenticator as the first step in your custom flow and ConditionalOTPAuthenticator as a later step. Ensure the OTP step is configured as Conditional.

Configuring Action Sets in a Custom Provider JAR

To extend Keycloak with custom authentication logic, you can package your custom authenticators and action sets in a provider JAR.

Steps to Create a Provider JAR

1. Implement Custom Authenticators

Create Java classes that implement the Authenticator or AuthenticatorFactory interfaces.

public class CustomAuthenticator implements Authenticator {
    @Override
    public void authenticate(AuthenticationFlowContext context) {
        // Custom authentication logic
        context.success();
    }

    @Override
    public void action(AuthenticationFlowContext context) {
        // Action to perform during this step
    }

    @Override
    public void close() {
        // Clean-up resources if necessary
    }
}

2. Register Authenticators in a Factory

Provide metadata and configuration for your authenticators.

public class CustomAuthenticatorFactory implements AuthenticatorFactory {
    @Override
    public Authenticator create(KeycloakSession session) {
        return new CustomAuthenticator();
    }

    @Override
    public String getId() {
        return "custom-authenticator";
    }

    @Override
    public void init(Config.Scope config) {}

    @Override
    public void postInit(KeycloakSessionFactory factory) {}

    @Override
    public void close() {}
}

3. Package the Provider as a JAR

  • Compile your classes and package them into a JAR file.
  • Include a META-INF/services/org.keycloak.authentication.AuthenticatorFactory file listing your factory class.
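For example, assuming the factory class lives in a package named com.example.auth (an illustrative name, not from the original), the services file contains a single line naming the factory:

```
# File: META-INF/services/org.keycloak.authentication.AuthenticatorFactory
com.example.auth.CustomAuthenticatorFactory
```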

4. Deploy the JAR

  • Copy the JAR file to the providers directory of your Keycloak server.
  • Restart Keycloak to load the custom provider.

5. Configure in Admin Console

  • Your custom authenticators will now appear as options in the Add Execution dialog when configuring authentication flows.

Best Practices for Multi-Step Authentication

  1. Simplify User Experience

    • Reduce the number of steps to the minimum required for security.
    • Use conditional actions to skip unnecessary steps (e.g., skipping OTP for trusted devices).
  2. Enhance Security

    • Validate inputs at every step.
    • Use short-lived tokens for OTPs to mitigate replay attacks.
    • Implement rate limiting to prevent brute-force attacks.
  3. Leverage Authentication Sessions

    • Use AuthenticationSessionModel to persist data across steps (e.g., OTP codes, device trust flags).
    • Store temporary data in session attributes rather than fetching it repeatedly from the database.
  4. Handle Errors Gracefully

    • Display user-friendly error messages.
    • Log errors using Keycloak’s event logging system to aid troubleshooting.
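To make the rate-limiting point concrete, here is a minimal fixed-window attempt counter of the kind you might consult before running an OTP action. It is a self-contained sketch, not a Keycloak API; a real deployment would also expire the window and share counters across cluster nodes.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal fixed-window attempt counter; illustrative, not a Keycloak API.
public class OtpRateLimiter {
    private final int maxAttempts;
    private final Map<String, Integer> attempts = new HashMap<>();

    public OtpRateLimiter(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    // Returns true while the user is still under the attempt budget.
    public boolean allow(String userId) {
        int count = attempts.merge(userId, 1, Integer::sum);
        return count <= maxAttempts;
    }

    public static void main(String[] args) {
        OtpRateLimiter limiter = new OtpRateLimiter(3);
        for (int i = 0; i < 4; i++) {
            System.out.println(limiter.allow("alice")); // true, true, true, false
        }
    }
}
```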

Advanced Strategies for Custom Flows

Persisting State Across Steps

Use AuthenticationSessionModel to manage temporary state during multi-step authentication.

Example: OTP Storage and Validation

public class OTPAuthenticator implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        String otp = generateOtp();
        AuthenticationSessionModel session = context.getAuthenticationSession();
        session.setAuthNote("otp", otp);
        sendOtpToUser(context.getUser(), otp);
        context.challenge(context.form().createForm("otp-form.ftl"));
    }

    @Override
    public void action(AuthenticationFlowContext context) {
        String inputOtp = context.getHttpRequest().getDecodedFormParameters().getFirst("otp");
        String sessionOtp = context.getAuthenticationSession().getAuthNote("otp");

        if (inputOtp != null && inputOtp.equals(sessionOtp)) {
            context.success();
        } else {
            context.failure(AuthenticationFlowError.INVALID_CREDENTIALS);
        }
    }
}

Conclusion

By leveraging action sets and authentication sessions, Keycloak enables the creation of secure, user-friendly, and flexible authentication flows. Whether implementing simple two-step verification or advanced passwordless login with device trust, following the best practices outlined in this guide will help ensure robust and reliable authentication processes. Start experimenting with custom flows to unlock the full potential of Keycloak!

Using Action Sets in Keycloak Authentication: Managing Multi-Step Flows

 

Keycloak provides powerful mechanisms for implementing custom authentication flows through action sets and authentication sessions. This flexibility allows you to handle simple two-step authentication processes as well as complex multi-step flows, such as Passwordless, Multi-Factor Authentication (MFA), and OTP-based authentication. Let’s dive into how you can achieve these flows using action sets, discuss best practices, and explore some advanced strategies.


What are Action Sets in Keycloak?

Action sets are used to define steps in an authentication flow. Each step in the flow can be assigned specific actions, such as verifying user credentials, sending an OTP, or checking device trust. Keycloak chains these actions together to build custom flows.

Key Concepts:

  1. Action: A single step in the authentication process (e.g., password validation, OTP validation).
  2. Action Set: A collection of actions grouped as part of an authentication step.
  3. Execution Flow: Defines how steps and actions are executed in sequence.

Setting Up Multi-Step Authentication Flows

Example: 2-Step Flow (Password + OTP)

Step 1: Validate Password

  • Action: PasswordForm
    • This form accepts the user’s username and password for validation.
    • Execution Requirement: Required

Step 2: Validate OTP

  • Action: OTPForm
    • This step generates and validates an OTP sent to the user’s registered email or phone.
    • Execution Requirement: Required
{
  "alias": "Password + OTP",
  "authenticationExecutions": [
    {
      "authenticator": "password-form",
      "requirement": "REQUIRED",
      "priority": 10
    },
    {
      "authenticator": "otp-form",
      "requirement": "REQUIRED",
      "priority": 20
    }
  ]
}

Example: 3-Step Flow (Passwordless Login with Device Trust)

Step 1: Username/Email Input

  • Action: UserNameForm
    • Allows the user to input their username or email.
    • Execution Requirement: Required

Step 2: OTP Validation

  • Action: OTPForm
    • Generates and validates an OTP for passwordless login.
    • Execution Requirement: Required

Step 3: Device Trust Validation

  • Action: DeviceTrustCheck
    • Checks whether the login is from a trusted device or browser.
    • Execution Requirement: Conditional (skipped for trusted devices)
{
  "alias": "Passwordless with Device Trust",
  "authenticationExecutions": [
    {
      "authenticator": "username-form",
      "requirement": "REQUIRED",
      "priority": 10
    },
    {
      "authenticator": "otp-form",
      "requirement": "REQUIRED",
      "priority": 20
    },
    {
      "authenticator": "device-trust-check",
      "requirement": "CONDITIONAL",
      "priority": 30
    }
  ]
}

Best Practices for Multi-Step Flows

  1. Keep it User-Friendly

    • Minimize the number of steps unless necessary.
    • Use conditional actions to skip unnecessary steps (e.g., skipping OTP for trusted devices).
  2. Ensure Security

    • Validate inputs at each step.
    • Use short-lived tokens for OTPs to prevent replay attacks.
    • Implement rate limiting to avoid brute-force attacks.
  3. Leverage Authentication Sessions

    • Use the AuthenticationSessionModel to persist state across steps.
    • Store temporary data like OTP codes or device trust flags in the session to avoid re-fetching from the database.
  4. Error Handling

    • Return descriptive error messages without revealing sensitive information.
    • Log errors using Keycloak’s event system.

Code Samples

Persisting State Across Steps

You can use AuthenticationSessionModel to store and retrieve state:

public class OTPAuthenticator implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        String otp = generateOtp();
        AuthenticationSessionModel session = context.getAuthenticationSession();
        session.setAuthNote("otp", otp);
        sendOtpToUser(context.getUser(), otp);
        context.challenge(context.form().createForm("otp-form.ftl"));
    }

    @Override
    public void action(AuthenticationFlowContext context) {
        String inputOtp = context.getHttpRequest().getDecodedFormParameters().getFirst("otp");
        String sessionOtp = context.getAuthenticationSession().getAuthNote("otp");

        if (inputOtp != null && inputOtp.equals(sessionOtp)) {
            context.success();
        } else {
            context.failure(AuthenticationFlowError.INVALID_CREDENTIALS);
        }
    }
}

Conditional Flows

Implement conditions to skip steps dynamically:

public class TrustedDeviceCondition implements Authenticator {

    @Override
    public void authenticate(AuthenticationFlowContext context) {
        if (isTrustedDevice(context)) {
            context.success();   // Trusted device: skip the extra step.
        } else {
            context.attempted(); // Fall through to the next execution.
        }
    }

    private boolean isTrustedDevice(AuthenticationFlowContext context) {
        // For illustration, treat the presence of a device-trust cookie as sufficient.
        return context.getHttpRequest().getHttpHeaders()
                .getCookies().get("trusted_device") != null;
    }
}

When to Use Authentication Sessions

Scenarios:

  1. Multi-Step Authentication

    • Store intermediate data like OTPs, temporary user information, or device trust flags.
  2. Custom State Management

    • Use AuthNotes for short-lived state (e.g., per-login session data).
    • Use user attributes for longer-lived data (e.g., device trust information).

Best Practices:

  • Avoid Sensitive Data in Sessions: Do not store passwords or sensitive tokens in session attributes.
  • Clean Up: Remove session attributes once they are no longer needed to prevent state leakage.
  • Log Events: Track state changes and authentication events using Keycloak’s event logging.

Keycloak’s flexibility with action sets and authentication sessions enables you to design secure, user-friendly, and highly customizable authentication flows. By leveraging these tools effectively, you can handle diverse scenarios, from simple password authentication to sophisticated passwordless and multi-factor workflows.

Understanding Authenticator.createAuthSession in Keycloak

 

Welcome! Today, we're going to explore something called Authenticator.createAuthSession in Keycloak, and I'll explain it in a simple way so that even a 10-year-old can follow along. Imagine we’re setting up a clubhouse, and each member needs a special pass to enter. Keycloak helps us manage these passes and makes sure only the right people get inside.

Now, let’s dive in!


What is Keycloak?

Keycloak is like a big security guard for websites and apps. It keeps track of:

  • Who you are (your username)
  • How you prove it’s you (your password, OTP, or other methods)
  • What you’re allowed to do (permissions)

Whenever you log into an app or website, Keycloak helps you get inside if you’re allowed.


What is an Authentication Session?

When you try to log in, Keycloak creates a temporary "login ticket" for you. This ticket is called an authentication session. Think of it like a temporary pass that lets you move through different checkpoints (like entering your password or verifying an OTP) until you’re fully logged in.

Once the process is done:

  • If you pass all the checkpoints, you get a membership pass (a permanent session).
  • If you fail, your temporary pass is thrown away.

What Does Authenticator.createAuthSession Do?

Authenticator.createAuthSession is a function that creates this temporary pass (authentication session). Let’s understand the lifecycle step by step:

Lifecycle of Authenticator.createAuthSession

1. Check for an Existing Session

  • When a user tries to log in, Keycloak first checks if they already have an authentication session.
  • This prevents creating duplicate sessions for the same user, especially if they accidentally refresh the page or start the process again.
  • If an existing session is found, it will be reused.

2. Create a New Authentication Session

  • If no existing session is found, Keycloak creates a new authentication session.
  • This session acts as a placeholder for the login process and ensures that each step (e.g., entering username, password, or OTP) is tracked properly.
  • This is where the createAuthSession method kicks in to generate this session.

3. Bind the Session to the Realm and Client

  • The new session is linked to:
    • The realm: This is like a section of the clubhouse. It determines which group the user belongs to.
    • The client: This is the app or website the user is trying to access (e.g., the "Clubhouse App").
  • This binding ensures that the session is valid only for that specific app and group.

4. Add an Action to the Session

  • Actions define what the user is trying to do. For example:
    • LOGIN: The user is logging into the app.
    • RESET_PASSWORD: The user wants to reset their password.
  • This action helps Keycloak decide what steps to show next (e.g., OTP verification).

5. Store Session Details in the Context

  • The authentication session is saved in the Keycloak context.
  • This context is like a clipboard that Keycloak uses to store temporary information during the login process.
  • It ensures that every step of the login process has access to the necessary data (e.g., the user’s username or OTP).

6. Handle Expiration

  • If the user doesn’t complete the login process within a certain time (e.g., 5 minutes), the authentication session expires.
  • This ensures that temporary data doesn’t linger around forever and keeps the system secure.
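
The expiry check in step 6 can be sketched as follows. This is a simplified, self-contained model with illustrative names (Keycloak manages session lifespans internally): the session records its creation time and is considered dead once the configured lifespan has passed.

```java
import java.time.Duration;
import java.time.Instant;

// Simplified model of authentication-session expiry: the session stores when
// it was created, and isExpired compares the elapsed time against a lifespan.
public class ExpiringAuthSession {
    private final Instant createdAt;
    private final Duration lifespan;

    public ExpiringAuthSession(Instant createdAt, Duration lifespan) {
        this.createdAt = createdAt;
        this.lifespan = lifespan;
    }

    // True once "now" is past createdAt + lifespan (e.g., 5 minutes).
    public boolean isExpired(Instant now) {
        return now.isAfter(createdAt.plus(lifespan));
    }
}
```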

7. Return the Authentication Session

  • Finally, the session is returned to the calling function, so the login process can continue.
  • The session is used for subsequent steps, like verifying the user’s password or sending an OTP.

Example Code (Simplified)

Here’s a fun way to understand createAuthSession with a code example. Think of it as creating a ticket in our clubhouse system:

public class AuthSessionManager {
    public AuthenticationSessionModel createAuthSession(KeycloakSession session, AuthenticationSessionManager authSessionManager) {
        AuthenticationSessionModel authSession = authSessionManager.getCurrentAuthenticationSession(session, session.getContext().getClient());

        if (authSession == null) {
            authSession = authSessionManager.createAuthenticationSession(session, session.getContext().getRealm(), session.getContext().getClient());
            authSession.setAction("LOGIN");

            System.out.println("New authentication session created!");
        } else {
            System.out.println("Reusing existing session.");
        }

        return authSession; // Return the ticket
    }
}

Walkthrough of the Code

  • Step 1: Check if a login ticket (authentication session) already exists.
  • Step 2: If no ticket exists, create a new one.
  • Step 3: Add information to the ticket, like which app you’re logging into.
  • Finally, return the ticket so the login process can continue.

Best Practices for createAuthSession

  1. Handle Expiry Gracefully:

    • Always define an appropriate timeout for authentication sessions to ensure security.
  2. Support Multiple Flows:

    • Use dynamic actions (LOGIN, MFA, PASSWORDLESS, etc.) to handle different login methods.
  3. Log and Monitor:

    • Log session creation and reuse events for debugging and auditing purposes.
  4. Secure Context Data:

    • Ensure sensitive information in the context is encrypted or sanitized properly.
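
The "Log and Monitor" practice can be sketched with a small helper that records whether a lookup created a fresh session or reused an existing one. The class and method names here are illustrative (Keycloak authenticators typically log via JBoss Logging rather than java.util.logging).

```java
import java.util.logging.Logger;

// Sketch of the "log and monitor" best practice: record which branch the
// session lookup took, so creation vs. reuse can be audited later.
public class SessionAudit {
    private static final Logger LOG = Logger.getLogger(SessionAudit.class.getName());

    // Returns the event name so callers can also observe the decision.
    public static String recordLookup(Object existingSession) {
        String event = (existingSession == null)
                ? "AUTH_SESSION_CREATED"
                : "AUTH_SESSION_REUSED";
        LOG.info(event);
        return event;
    }
}
```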

Supporting Multiple Flows

Keycloak can handle various authentication flows, such as:

Passwordless (OTP-Based)

authSession.setAction("OTP_VERIFY");
// Send OTP to the user
otpService.sendOtp(user.getPhoneNumber());

Multi-Factor Authentication (MFA)

authSession.setAction("MFA_VERIFY");
// Trigger MFA using configured methods (e.g., TOTP, push notification)
mfaService.triggerMFA(user.getId());

Username and Password

authSession.setAction("LOGIN");
// Proceed with password verification
authManager.verifyPassword(user, providedPassword);

Using AuthNote in Authentication

What is AuthNote?

AuthNote is a key-value store associated with an authentication session or user session. Think of it as a sticky note attached to a session or user, storing temporary information needed during the login process.

When to Use AuthNote?

  1. Session-Specific Data: Use AuthNote when you need to store data specific to the ongoing authentication session. For example:

    • Storing OTPs during a passwordless or multi-factor authentication (MFA) flow.
    • Keeping track of intermediate steps (e.g., if the user has verified their phone but not their email).
  2. User-Specific Data: Use AuthNote for temporary data about a user that doesn’t need to persist across multiple sessions. For example:

    • Storing a one-time token for resetting passwords.
    • Keeping a flag for specific actions like user consent during login.
  3. Dynamic Flows: Use AuthNote when implementing custom authentication flows with multiple steps.

Code Examples for AuthNote

Passwordless Authentication (OTP-Based)

// Step 1: Generate and store OTP in the session
String otp = otpService.generateOtp();
authSession.setAuthNote("OTP", otp);

// Step 2: Send OTP to the user
otpService.sendOtp(user.getPhoneNumber());

// Step 3: Verify the OTP
String providedOtp = formParameters.get("otp");
String storedOtp = authSession.getAuthNote("OTP");

// Null check guards against a missing or expired note
if (storedOtp == null || !storedOtp.equals(providedOtp)) {
    throw new AuthenticationException("Invalid OTP");
}

Multi-Factor Authentication (MFA)

// Step 1: Add a note for MFA
String mfaToken = mfaService.generateToken();
authSession.setAuthNote("MFA_TOKEN", mfaToken);

// Step 2: Verify MFA token
String providedMfaToken = formParameters.get("mfaToken");
String storedMfaToken = authSession.getAuthNote("MFA_TOKEN");

// Null check guards against a missing or expired note
if (storedMfaToken == null || !storedMfaToken.equals(providedMfaToken)) {
    throw new AuthenticationException("Invalid MFA Token");
}
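
The comparisons in both snippets above can be centralized in a null-safe helper. As a bonus, java.security.MessageDigest.isEqual performs a timing-safe comparison, whereas String.equals short-circuits on the first mismatching character, which can leak timing information. The class name here is illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Null-safe, timing-safe comparison for OTP/MFA tokens.
public final class TokenCheck {
    public static boolean matches(String stored, String provided) {
        if (stored == null || provided == null) {
            return false; // missing note or missing form field: always reject
        }
        // MessageDigest.isEqual compares in constant time for equal-length inputs
        return MessageDigest.isEqual(
                stored.getBytes(StandardCharsets.UTF_8),
                provided.getBytes(StandardCharsets.UTF_8));
    }
}
```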

Things to Remember

  • Expiration Policy: Always configure session expiration to prevent stale sessions.
  • Error Handling: Use clear error messages when authentication fails.
  • Custom Flows: Customize actions to fit your application’s needs.
  • Session Reuse: Avoid creating multiple sessions unnecessarily by checking for existing ones.
  • Security: Protect the session context to prevent leaks.
  • Use AuthNote Wisely: Don’t overload the session with unnecessary data. Only store what’s required for the current flow.

Conclusion

Authenticator.createAuthSession is like a system that gives you a temporary pass to start logging into an app. It’s an