February 21, 2025

Fixing "WaitTimeSeconds is Invalid" Error in Amazon SQS

Introduction

Amazon Simple Queue Service (SQS) is like a postbox for messages in the cloud. It helps different parts of a system talk to each other by sending and receiving messages safely. However, sometimes, you might see an error like this:

"pool-12-thread-1" software.amazon.awssdk.services.sqs.model.SqsException: Value 120 for parameter WaitTimeSeconds is invalid. Reason: Must be >= 0 and <= 20, if provided. (Service: Sqs, Status Code: 400, Request ID: 6778dcd6-b3e8-59ca-a981-bce42253312d)

This error happens when the WaitTimeSeconds setting is greater than 20, which SQS does not allow. Let's break it down in simple terms and see how to fix it.


Understanding the Error

What is WaitTimeSeconds?

Think of WaitTimeSeconds like waiting at a bus stop. If you set it to 0, it's like checking for a bus and leaving immediately if there isn’t one (short polling). If you set it to a number between 1 and 20, it means you wait for some time before deciding no bus is coming (long polling). But if you try to wait more than 20 seconds, SQS says, “That’s too long!” and throws an error.

Why Does This Happen?

If the system tries to wait longer than 20 seconds, Amazon SQS stops it because that’s the maximum waiting time allowed per request.

Common reasons include:

  • Wrong settings in your application code.
  • Misunderstanding of polling rules (thinking you can wait longer than allowed).
  • Forgetting to set a valid value and using a default that is too high.
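One defensive fix is to clamp the configured value into the allowed range before building the request. Below is a minimal sketch of such a guard; the class and method names are illustrative, not part of the AWS SDK:

```java
class SqsWaitTime {
    // SQS allows WaitTimeSeconds from 0 to 20 (inclusive)
    static final int MAX_WAIT_SECONDS = 20;

    // Clamp any configured value into the valid range before using it
    static int clamp(int requestedSeconds) {
        return Math.max(0, Math.min(requestedSeconds, MAX_WAIT_SECONDS));
    }
}
```

With a guard like this, a misconfigured value such as 120 is reduced to 20 instead of triggering the 400 error shown above.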

When and How to Use WaitTimeSeconds

When Should You Use It?

  • If you want to check for new messages quickly, set WaitTimeSeconds = 0 (short polling).
  • If you want to reduce unnecessary requests and save money, set WaitTimeSeconds between 1-20 (long polling).
  • If your system does not need real-time responses, use the maximum 20 seconds to lower API costs and reduce load on your application.

How to Use It Correctly

Example in AWS SDK for Java (v2):

import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

ReceiveMessageRequest receiveMessageRequest = ReceiveMessageRequest.builder()
    .queueUrl(queueUrl)
    .waitTimeSeconds(20) // ✅ Must be between 0 and 20
    .maxNumberOfMessages(10)
    .build();

Recommended Settings for Best Performance

| Scenario | Recommended WaitTimeSeconds | Why? |
| --- | --- | --- |
| Real-time applications | 0-5 | Faster response but may increase API calls. |
| Standard polling (normal) | 10-15 | Balanced approach for efficiency. |
| Batch processing (not urgent) | 20 | Reduces cost by minimizing API calls. |

Fixing the Issue

1. Use a Valid WaitTimeSeconds Value

Make sure it’s between 0 and 20 to avoid errors.

2. Loop If You Need a Longer Delay

If you need more than 20 seconds, use a loop instead of setting an invalid value.

while (true) {
    ReceiveMessageRequest receiveMessageRequest = ReceiveMessageRequest.builder()
        .queueUrl(queueUrl)
        .waitTimeSeconds(20) // Maximum allowed value
        .maxNumberOfMessages(10)
        .build();

    // Capture the response so the received messages can actually be processed
    ReceiveMessageResponse response = sqsClient.receiveMessage(receiveMessageRequest);
    response.messages().forEach(message -> {
        // handle each message, then delete it from the queue
    });
}

3. Set Default Long Polling in Queue Settings

If you want all consumers to wait for a specific time, set the Receive Message Wait Time in the queue settings (Amazon SQS Console → Queue → Edit → Receive Message Wait Time).


Best Practices for Using Long Polling in SQS

  • Use long polling (WaitTimeSeconds > 0) to reduce API requests and costs.
  • Set WaitTimeSeconds to 20 for best efficiency and fewer requests.
  • Use batch processing (maxNumberOfMessages > 1) to optimize performance.
  • Monitor with AWS CloudWatch to track queue performance and adjust settings.
  • Implement retries with exponential backoff to handle failures properly.

Common Problems and Solutions

1. High API Costs Due to Short Polling

  • If WaitTimeSeconds=0, every receive call returns immediately, so you make far more API requests than necessary.
  • Solution: Set WaitTimeSeconds to a value between 10 and 20 seconds.

2. Messages Take Too Long to Appear

  • If messages don’t appear, another system might be processing them due to the visibility timeout setting.
  • Solution: Check and adjust visibility timeout if necessary.

3. System Stops Receiving Messages Suddenly

  • If your app stops receiving messages, check whether it is processing them too slowly to keep up with incoming traffic.
  • Solution: Increase maxNumberOfMessages and ensure enough workers are running.


Conclusion

The "WaitTimeSeconds is invalid" error in Amazon SQS happens when you set a value outside the allowed range of 0 to 20. To fix it:

  1. Set WaitTimeSeconds between 0-20.
  2. Use a loop if you need longer delays.
  3. Configure queue settings for default long polling.
  4. Follow best practices for cost and performance efficiency.

By following these recommendations, you can use SQS more effectively, reduce API costs, and avoid common pitfalls. Happy coding! 🚀

February 18, 2025

Custom Annotation in Spring Boot: Restricting Age Below 18

When building apps, we sometimes need to stop users under 18 from signing up. Instead of writing the same rule everywhere, we can make a special tag (annotation) to check age easily. Let's learn how to do it step by step!

Why Use Custom Annotations?

Spring Boot has built-in checks like @NotNull (not empty) and @Size (length), but not for age. Instead of writing the same age-checking code again and again, we create a custom annotation that we can use anywhere in the app.

Steps to Create an Age Validator

Step 1: Create the Annotation

This is like making a new sticker that says "Check Age" which we can put on our data fields.

import jakarta.validation.Constraint;
import jakarta.validation.Payload;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Constraint(validatedBy = AgeValidator.class)
@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
public @interface MinAge {
    int value() default 18;
    String message() default "You must be at least {value} years old";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

Understanding the Annotations Used

  • @Constraint(validatedBy = AgeValidator.class): Links the annotation to the AgeValidator class, which contains the logic to validate the age.
  • @Target({ElementType.FIELD, ElementType.PARAMETER}): Specifies where we can use this annotation. Options include:
    • ElementType.FIELD: Can be applied to fields in a class.
    • ElementType.PARAMETER: Can be used on method parameters.
    • Other options: METHOD, TYPE, ANNOTATION_TYPE, etc.
  • @Retention(RetentionPolicy.RUNTIME): Defines when the annotation is available. Options include:
    • RetentionPolicy.RUNTIME: The annotation is accessible during runtime (needed for validation).
    • RetentionPolicy.CLASS: Available in the class file but not at runtime.
    • RetentionPolicy.SOURCE: Only used in source code and discarded by the compiler.

Step 2: Write the Age Checking Logic

This part calculates the age and tells if it's 18 or more.

import jakarta.validation.ConstraintValidator;
import jakarta.validation.ConstraintValidatorContext;
import java.time.LocalDate;
import java.time.Period;

public class AgeValidator implements ConstraintValidator<MinAge, LocalDate> {
    private int minAge;

    @Override
    public void initialize(MinAge constraintAnnotation) {
        this.minAge = constraintAnnotation.value();
    }

    @Override
    public boolean isValid(LocalDate dob, ConstraintValidatorContext context) {
        if (dob == null) {
            return false; // No date means invalid
        }
        return Period.between(dob, LocalDate.now()).getYears() >= minAge;
    }
}
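To see the core check in isolation, here is a small standalone version of the same Period-based age calculation. The class name is illustrative, and the reference date is passed in explicitly so the result is reproducible:

```java
import java.time.LocalDate;
import java.time.Period;

class AgeCheck {
    // True if the person born on dob is at least minAge years old on the given date
    static boolean isAtLeast(LocalDate dob, int minAge, LocalDate onDate) {
        if (dob == null) {
            return false; // no date means invalid, matching the validator above
        }
        return Period.between(dob, onDate).getYears() >= minAge;
    }
}
```

Note the boundary case: on the exact 18th birthday, Period.between yields 18 full years, so the check passes.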

Step 3: Use the Annotation in a User Data Class

Now, we use @MinAge to check age whenever someone signs up.

import jakarta.validation.constraints.NotNull;
import java.time.LocalDate;

public class UserDTO {
    @NotNull(message = "Please enter your birthdate")
    @MinAge(18)
    private LocalDate dateOfBirth;

    // Getters and Setters
    public LocalDate getDateOfBirth() {
        return dateOfBirth;
    }

    public void setDateOfBirth(LocalDate dateOfBirth) {
        this.dateOfBirth = dateOfBirth;
    }
}

Step 4: Apply Validation in a Controller

When a new user signs up, we check their age automatically.

import jakarta.validation.Valid;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/users")
public class UserController {
    @PostMapping("/register")
    public ResponseEntity<String> registerUser(@RequestBody @Valid UserDTO userDTO) {
        return ResponseEntity.ok("User registered successfully");
    }
}

Step 5: Test the Validation

If someone younger than 18 tries to sign up, they will see this message:

{
  "dateOfBirth": "You must be at least 18 years old"
}

Making Sure Name is Lowercase

Sometimes, we want names to be stored in lowercase automatically. There are two ways to do this:

Option 1: Use @ColumnTransformer (Hibernate)

If using Hibernate, we can transform the value before saving.

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.annotations.ColumnTransformer;

@Entity
public class User {
    @Id
    private Long id;

    @ColumnTransformer(write = "lower(?)")
    private String name;
}

Option 2: Custom Annotation for Lowercase

If we want to ensure lowercase format, we can create a custom annotation.

Step 1: Create the Annotation

import jakarta.validation.Constraint;
import jakarta.validation.Payload;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Constraint(validatedBy = LowercaseValidator.class)
@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Lowercase {
    String message() default "Must be lowercase";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

Step 2: Create the Validator

import jakarta.validation.ConstraintValidator;
import jakarta.validation.ConstraintValidatorContext;

public class LowercaseValidator implements ConstraintValidator<Lowercase, String> {
    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        return value != null && value.equals(value.toLowerCase());
    }
}
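The check itself is plain string logic. Here it is as a standalone helper mirroring the validator's behavior (note that this version, like the validator above, treats null as invalid; many validators instead return true for null and leave null checking to @NotNull):

```java
class LowercaseCheck {
    // True only when the value is non-null and contains no uppercase characters
    static boolean isLowercase(String value) {
        return value != null && value.equals(value.toLowerCase());
    }
}
```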

Step 3: Use the Annotation

public class UserDTO {
    @Lowercase
    private String name;
}

Recommended Approach

  • If using Hibernate, @ColumnTransformer(write = "lower(?)") is simple and works well.
  • If working with validations, a custom @Lowercase annotation ensures that input is already correct.
  • A hybrid approach: Apply both for best consistency.

Conclusion

By making a custom @MinAge annotation, we ensure kids under 18 cannot register. Similarly, using a @Lowercase annotation or Hibernate’s built-in transformation helps maintain data consistency. These techniques keep our code clean, reusable, and easy to maintain.

Hope this helps! Happy coding! 🚀

February 12, 2025

The Best Strategy for Branch Deployment in QA, Pre-Prod, and Prod

In modern software development, efficient branch deployment is crucial for maintaining a smooth development cycle while ensuring quality and stability. A well-defined branching strategy helps teams manage deployments effectively, reduce conflicts, and improve collaboration between developers, testers, and operations teams. This blog will outline the best and easiest-to-handle strategy for branch deployment, focusing on fewer steps and a streamlined Dev-to-Prod pipeline.




Why a Simplified Deployment Strategy Matters

  1. Reduces Complexity – Fewer branches mean less confusion.
  2. Faster Time to Production – Automates testing and approvals.
  3. Ensures Stability – Prevents unfinished code from reaching production.
  4. Supports Quick Fixes – Hotfixes are easy to apply.
  5. Improves Collaboration – Developers, testers, and DevOps work seamlessly.
  6. Enhances Code Quality – Ensures tested and stable features reach production.

Latest and Simplified Branching Strategy

1. Branch Structure

To optimize deployment speed, we follow a streamlined approach:

  • main (or master) – Always production-ready.
  • Feature branches (feature/XYZ) – Developers work on new features.
  • QA branch (qa) – All feature branches merge here for initial testing.
  • Pre-Prod branch (pre-prod) – Stable, tested code moves here before production release.
  • Hotfix branches (hotfix/XYZ) – Urgent fixes branched from main.

2. Workflow for Deployment

Dev to QA Deployment Process

  1. Developers create feature branches from main.
  2. After local testing, feature branches are merged into qa.
  3. QA testing is performed on qa, and fixes are pushed directly to qa.
  4. Once QA approves, qa is merged into pre-prod for staging validation.

Pre-Prod to Prod Deployment Process

  1. Once testing in pre-prod is complete, pre-prod is merged into main.
  2. Production deployment is triggered via CI/CD.
  3. If issues are found in Production, fixes are applied via hotfix/XYZ and merged into main, pre-prod, and qa.
  4. CI/CD ensures automated rollback in case of failures.

CI/CD Pipeline Integration

A robust CI/CD pipeline should automate deployments based on branch activities:

  • QA: Auto-deploy builds from qa for initial testing.
  • Pre-Prod: Auto-deploy from pre-prod for staging.
  • Production: Auto-deploy from main after approval.

Key Steps:

  • Automated Testing: Runs on every merge, including unit, integration, and regression tests.
  • Code Quality Checks: Ensures best practices using tools like SonarQube, ESLint, or Checkstyle.
  • Security Scanning: Identifies vulnerabilities before deployment.
  • Approval Gates: Manual approval before going live in production.
  • Blue-Green Deployment: Ensures zero downtime releases.
  • Canary Releases: Gradually roll out changes to a subset of users before full deployment.

Best Practices for Easy Handling

  1. Minimal Branches – Stick to main, qa, pre-prod, and feature/*.
  2. Use Trunk-Based Development – Avoid long-lived branches.
  3. Automate Everything – CI/CD handles deployments, testing, and security scanning.
  4. Quick Rollbacks – Use feature flags and automated rollbacks.
  5. Sync Environments – Ensure qa, pre-prod, and prod stay aligned.
  6. Monitor Performance – Use APM tools (New Relic, Datadog) to track issues.
  7. Infrastructure as Code (IaC) – Automate environment provisioning using Terraform or Ansible.
  8. Document Everything – Maintain clear deployment playbooks for the team.

Conclusion

A streamlined branch deployment strategy reduces complexity, improves efficiency, and ensures high-quality production releases. By minimizing the number of branches, integrating a robust CI/CD pipeline, and adopting best practices like automated testing, blue-green deployments, and canary releases, teams can achieve a seamless Dev-to-Prod workflow with minimal manual steps.

What branching strategy do you follow? Let us know in the comments!

February 11, 2025

Is-A vs Has-A vs Association vs Composition vs Multiple & Multilevel Inheritance

When designing object-oriented systems, understanding relationships between classes is crucial. Terms like Is-A, Has-A, Association, Composition, Multiple Inheritance, and Multilevel Inheritance often create confusion. Whether you're a beginner or have 15 years of experience, this blog will provide a detailed yet easy-to-understand explanation with examples, along with insights into SOLID principles, Design Patterns from Head First Design Patterns, and best practices for future-proof design.




1. Is-A Relationship (Inheritance)

Definition:

An "Is-A" relationship is based on inheritance. It means one class is a subtype of another class, forming a hierarchy.

Example:

A Dog is an Animal, so it inherits behavior from Animal.

class Animal {
    void makeSound() {
        System.out.println("Animal makes a sound");
    }
}

class Dog extends Animal {
    void bark() {
        System.out.println("Dog barks");
    }
}

public class Main {
    public static void main(String[] args) {
        Dog dog = new Dog();
        dog.makeSound(); // Inherited from Animal
        dog.bark();      // Dog-specific behavior
    }
}

2. Has-A Relationship (Composition & Aggregation)

Definition:

A "Has-A" relationship means one class contains another class as a field, forming a dependency. This is called composition or aggregation.

Example 1 (Composition - Strong Relationship):

A Car has an Engine. The engine is tightly coupled with the car and cannot exist independently.

class Engine {
    void start() {
        System.out.println("Engine starts");
    }
}

class Car {
    private Engine engine;
    
    Car() {
        engine = new Engine(); // Strong relationship (Composition)
    }
    
    void startCar() {
        engine.start();
        System.out.println("Car is running");
    }
}

public class Main {
    public static void main(String[] args) {
        Car car = new Car();
        car.startCar();
    }
}

Example 2 (Aggregation - Weak Relationship):

A Library has Books, but if the library is destroyed, books can still exist.

import java.util.Arrays;
import java.util.List;

class Book {
    private String title;
    
    Book(String title) {
        this.title = title;
    }
    
    void display() {
        System.out.println("Book: " + title);
    }
}

class Library {
    private List<Book> books;
    
    Library(List<Book> books) {
        this.books = books; // Weak relationship (Aggregation)
    }
    
    void showBooks() {
        for (Book book : books) {
            book.display();
        }
    }
}

public class Main {
    public static void main(String[] args) {
        List<Book> books = Arrays.asList(new Book("Head First Design Patterns"), new Book("Effective Java"));
        Library library = new Library(books);
        library.showBooks();
    }
}

3. Association (A General Relationship)

Definition:

Association is a general term for any relationship between two independent classes without ownership.

Example:

A Student and Teacher have an association.

class Student {
    private String name;
    
    Student(String name) {
        this.name = name;
    }
    
    void display() {
        System.out.println("Student: " + name);
    }
}

class Teacher {
    private String subject;
    
    Teacher(String subject) {
        this.subject = subject;
    }
    
    void teach(Student student) {
        System.out.println("Teacher teaching: " + subject);
        student.display();
    }
}

public class Main {
    public static void main(String[] args) {
        Student student = new Student("Alice");
        Teacher teacher = new Teacher("Math");
        
        teacher.teach(student);
    }
}

4. Multiple Inheritance (Interface-Based)

Definition:

Java does not support multiple inheritance with classes but allows it using interfaces.

Example:

interface Flyable {
    void fly();
}

interface Swimmable {
    void swim();
}

class Duck implements Flyable, Swimmable {
    public void fly() {
        System.out.println("Duck can fly");
    }
    
    public void swim() {
        System.out.println("Duck can swim");
    }
}

public class Main {
    public static void main(String[] args) {
        Duck duck = new Duck();
        duck.fly();
        duck.swim();
    }
}

5. Multilevel Inheritance

Definition:

Multilevel inheritance forms a chain of inheritance.

Example:

class Animal {
    void eat() {
        System.out.println("Animal eats");
    }
}

class Mammal extends Animal {
    void walk() {
        System.out.println("Mammal walks");
    }
}

class Dog extends Mammal {
    void bark() {
        System.out.println("Dog barks");
    }
}

public class Main {
    public static void main(String[] args) {
        Dog dog = new Dog();
        dog.eat();
        dog.walk();
        dog.bark();
    }
}

Comparison Table

| Concept | Definition | Example |
| --- | --- | --- |
| Is-A (Inheritance) | Class is a type of another class | Dog extends Animal |
| Has-A (Composition) | Class contains another class | Car has an Engine |
| Aggregation | Class contains another class with weak dependency | Library has Books |
| Association | General relationship without ownership | Student and Teacher |
| Multiple Inheritance | A class implements multiple interfaces | Duck implements Flyable, Swimmable |
| Multilevel Inheritance | A class extends another, forming a chain | Dog extends Mammal extends Animal |

Final Thoughts

Understanding these concepts is crucial for designing scalable, maintainable software. Applying these principles correctly will make you a better software architect.

What are your thoughts? Do you prefer Is-A or Has-A in your projects? Let me know in the comments!

February 7, 2025

Fixing 'Port Already in Use' Error When Starting a Web Server

Imagine you have a toy car, but when you try to play with it, someone else is already using it. You either have to ask them to stop or find another toy to play with. Similarly, when you start your web server, sometimes another program is already using the port (like a parking spot for your server). Here's how to fix that!

1. Find Out Who is Using the Port

Before you can fix the issue, you need to see which program is using the port (like checking who is playing with your toy).

  • Windows (Command Prompt):

    netstat -ano | findstr :8082
    

    The last number in the output is the process ID (PID), which tells you which program is using the port.

  • Mac/Linux (Terminal):

    lsof -i :8082
    

    This will list the program using port 8082.

  • Another way for Mac (using netstat):

    netstat -anv | grep 8082
    

    This also shows details about which app is using the port.

2. List All Running Processes and Ports

If you want to see all the busy ports and running processes (like checking which toys are being used in the whole room):

  • Windows:

    netstat -aon
    
  • Linux/macOS:

    netstat -tulnp
    

    or

    ss -tulnp
    
  • Mac Users (Using Homebrew lsof): First, install lsof (if not installed):

    brew install lsof
    

    Then run:

    lsof -nP -iTCP -sTCP:LISTEN
    

    This will list all processes listening on different ports.

3. Stop the Process Using the Port

Once you know which process is using the port, you can ask it to stop.

  • Windows:

    taskkill /PID <PID> /F
    

    Replace <PID> with the actual process ID.

  • Mac/Linux:

    kill -9 <PID>
    

    This forcefully stops the process.

4. Force Stop the Process (If It Won’t Quit)

If the process doesn’t stop normally, you can forcefully end it.

  • Windows:

    taskkill /IM <process-name> /F
    

    Replace <process-name> with the actual application name.

  • Mac/Linux:

    sudo pkill -9 -f <process-name>
    

    or

    sudo killall -9 <process-name>
    
  • Mac Users (Using Homebrew htop): If you like a visual way to see and stop processes, install htop:

    brew install htop
    

    Then run:

    htop
    

    Use the arrow keys to select the process and press F9 to stop it.

5. Change the Port Instead

If you don’t want to stop the process, you can change your web server to use a different port (like finding another parking space).

  • Spring Boot: Change application.properties:

    server.port=8083

    or application.yml:

    server:
      port: 8083
    
  • Node.js/Express:

    const port = process.env.PORT || 8083;
    
  • Tomcat: Edit conf/server.xml:

    <Connector port="8083" ... />
    
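If you want to check programmatically whether a port is actually free before starting your server, a simple approach is to try binding to it. Below is a minimal Java sketch of that idea (the helper name is illustrative):

```java
import java.io.IOException;
import java.net.ServerSocket;

class PortCheck {
    // Try to bind to the port; success means it is free right now
    static boolean isPortFree(int port) {
        try (ServerSocket ignored = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false; // something is already listening (or we lack permission)
        }
    }
}
```

Note that this is only a point-in-time check; another process could still grab the port between the check and your server's startup.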

6. Restart Your Computer

If nothing works, restarting your computer can free up the port (like clearing the whole room so you can play with your toy again).

7. Common Port Issues and Solutions

| Issue | Solution |
| --- | --- |
| Port is in use by another process | Find and kill the process using lsof, netstat, or taskkill. |
| Port is not being released after stopping the app | Use kill -9 or taskkill /F to forcefully terminate it. |
| Firewall or antivirus blocking the port | Temporarily disable the firewall and check connectivity. |
| Process restarts automatically after being killed | Find and stop the service using systemctl, sc stop, or launchctl. |
| Need to free up all used ports | Use netstat -aon or lsof -i to list all used ports and terminate unnecessary processes. |

By following these steps, you can easily fix the "Port Already in Use" error and get your web server running smoothly!

Mastering Logical Puzzles: The Ultimate Brain Workout

Introduction

Logical puzzles are a powerful way to challenge your reasoning skills, enhance problem-solving abilities, and push the limits of your critical thinking. From classic riddles to advanced logic conundrums, these puzzles are designed to test your deductive abilities in unique ways. In this blog, we’ll explore the world of logical puzzles, showcase some of the most difficult ones, and provide solutions to unravel their mysteries.




Understanding Logical Puzzles

Logical puzzles require a step-by-step reasoning approach to arrive at a conclusion. These puzzles often involve conditions, sequences, constraints, and logical deduction. They can be found in many forms, including:

  • Grid-based logic puzzles (e.g., Einstein’s Riddle)
  • Number-based logic puzzles (e.g., Sudoku variations)
  • Word logic puzzles (e.g., Lateral Thinking puzzles)
  • Deductive reasoning puzzles (e.g., Knights and Knaves)

Now, let's dive into some advanced and challenging logical puzzles along with their solutions.


1. The Hardest Logic Puzzle Ever

Puzzle: Three gods—Truth, Lie, and Random—are standing before you. Truth always tells the truth, Lie always lies, and Random answers randomly. You can ask three yes-or-no questions, each directed at only one god at a time, to determine which god is which. However, the gods answer in their own language, which only means ‘yes’ or ‘no’ but you don’t know which is which. How do you determine the identities of the gods?

Solution: The key to solving this puzzle is to construct self-referential questions that remain valid regardless of how ‘yes’ and ‘no’ are interpreted. One effective strategy involves asking: “If I asked you whether God X is Truth, would you say yes?” This eliminates ambiguity by forcing consistent logical responses.


2. Einstein’s Riddle

Puzzle: There are five houses, each painted a different color. In each house lives a person of a different nationality. These five homeowners each drink a different beverage, own a different pet, and smoke a different brand of cigarettes. The following clues are given:

  1. The Brit lives in the red house.
  2. The Swede keeps dogs.
  3. The Dane drinks tea.
  4. The green house is immediately to the left of the white house.
  5. The owner of the green house drinks coffee.
  6. The person who smokes Pall Mall rears birds.
  7. The owner of the yellow house smokes Dunhill.
  8. The man living in the center house drinks milk.
  9. The Norwegian lives in the first house.
  10. The man who smokes Blend lives next to the one who keeps cats.
  11. The man who keeps horses lives next to the one who smokes Dunhill.
  12. The man who smokes Blue Master drinks beer.
  13. The German smokes Prince.
  14. The Norwegian lives next to the blue house.
  15. The man who smokes Blend has a neighbor who drinks water.

Question: Who owns the fish?

Solution: This puzzle is solved using a logical grid, deducing each homeowner’s attributes step by step. The final answer reveals that the German owns the fish.


3. The Prisoners and the Light Bulb

Puzzle: 100 prisoners are placed in isolated cells with no means of communication. Each day, one randomly chosen prisoner is taken to a special room with a light bulb. They may toggle the bulb or leave it unchanged. The warden tells them that if at some point one prisoner can confidently state that all prisoners have been in the room at least once, they will be set free. If they guess incorrectly, they are executed. The prisoners are allowed to strategize before the process begins. How do they ensure freedom?

Solution: A single designated prisoner (the leader) is the only one allowed to turn the bulb off. Every other prisoner turns the bulb on exactly once: the first time they enter the room and find it off. Each time the leader finds the bulb on, they turn it off and add one to their count. When the count reaches 99, the leader knows all 99 other prisoners have visited, and can confidently state that everyone has been in the room at least once.
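The strategy can be sanity-checked with a short simulation. In the sketch below (class and method names are illustrative), prisoner 0 is the leader who turns the bulb off and counts, and every other prisoner turns the bulb on exactly once; the simulation also verifies that when the count completes, everyone really has visited:

```java
import java.util.Random;

class PrisonersBulb {
    // Simulate the strategy for n prisoners; returns the number of days taken
    static int simulate(int n, long seed) {
        Random rng = new Random(seed);
        boolean bulbOn = false;
        boolean[] hasTurnedOn = new boolean[n]; // non-leaders who already toggled once
        boolean[] visited = new boolean[n];
        int leaderCount = 0;
        int days = 0;

        while (leaderCount < n - 1) {
            int p = rng.nextInt(n); // one random prisoner per day
            visited[p] = true;
            days++;
            if (p == 0) {                 // the leader
                if (bulbOn) {
                    bulbOn = false;       // turn it off and count one new visitor
                    leaderCount++;
                }
            } else if (!hasTurnedOn[p] && !bulbOn) {
                bulbOn = true;            // first visit with bulb off: turn it on
                hasTurnedOn[p] = true;
            }
        }

        // The leader's claim must be correct: everyone has visited at least once
        for (boolean v : visited) {
            if (!v) throw new IllegalStateException("strategy failed");
        }
        return days;
    }
}
```

Running it with 100 prisoners shows the strategy is correct, though slow: at minimum the 99 bulb-on visits and 99 leader visits alone take 198 days, and in practice it takes far longer.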


4. The Missing Dollar Riddle

Puzzle: Three friends check into a hotel room that costs $30. They each contribute $10. Later, the hotel realizes there was a mistake and the room only costs $25. The manager gives the bellboy $5 to return to the friends. The bellboy, however, keeps $2 for himself and gives $1 back to each friend. Now, each friend has paid $9, totaling $27. The bellboy has $2, making $29. Where is the missing dollar?

Solution: There is no missing dollar. The miscalculation happens because the $27 includes the $2 kept by the bellboy. Instead, it should be framed as: The hotel has $25, the bellboy has $2, and the three friends have $3 back, totaling $30.
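The correct framing can even be written out as a tiny arithmetic check (names are illustrative):

```java
class MissingDollar {
    // Correct framing: hotel's $25 + bellboy's $2 + friends' $3 refund = $30
    static int totalFromCorrectFraming() {
        int hotelKeeps = 25;
        int bellboyKeeps = 2;
        int refundedToFriends = 3;
        return hotelKeeps + bellboyKeeps + refundedToFriends;
    }

    // Each friend paid $10 and got $1 back: 3 * (10 - 1) = 27.
    // That $27 ALREADY includes the bellboy's $2 (25 + 2), so adding
    // the $2 again is the riddle's double-count.
    static int friendsNetPayment() {
        return 3 * (10 - 1);
    }
}
```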


Famous High-Difficulty Logical Puzzles

Below is a table of 25 of the most challenging logical puzzles ever created:

# Puzzle Name Description
1 The Hardest Logic Puzzle Ever Three gods with unknown responses
2 Einstein’s Riddle Who owns the fish?
3 The Prisoners and the Light Bulb 100 prisoners must deduce their visits
4 The Missing Dollar Riddle Accounting paradox
5 The Two Doors Riddle One guard lies, one tells the truth
6 The Monty Hall Problem Probability paradox
7 The Three Switches Riddle Identify which switch controls the lightbulb
8 The 100 Hat Riddle Prisoners must guess their hat color
9 The Cheryl’s Birthday Puzzle Logical deduction of a birthday
10 The Blue Eyes Puzzle Self-referential logical elimination
11 The Bridge and Torch Problem Crossing a bridge with constraints
12 The Farmer, Wolf, Goat, and Cabbage Safe river crossing puzzle
13 The Two Egg Problem Finding the highest safe drop floor
14 The 12 Coin Problem Identifying a counterfeit coin
15 The King's Gold Coin Puzzle Weighing to find a fake coin
16 The Knights and Knaves Puzzle Who is lying?
17 The Truth-Tellers and Liars Island Determine truthful individuals
18 The Mislabeled Jars Puzzle Identifying correctly labeled jars
19 The Camel Crossing Riddle Maximizing camel transport
20 The 5 Pirates Problem Logical bargaining for gold
21 The Crossing the Desert Puzzle Resource allocation puzzle
22 The Fork in the Road Puzzle Two paths, one deadly
23 The Forehead Numbers Riddle Deduction using visible numbers
24 The 3 Light Bulbs and 3 Switches Finding the correct bulb connection
25 The Alien Civilization Puzzle Logical translation challenge

Conclusion

Logical puzzles are an excellent way to push the boundaries of your reasoning and analytical skills. Are you ready to take on your next brain workout? Try crafting your own logical puzzle and see if your friends can solve it!

Which of these puzzles challenged you the most? Let us know in the comments! 🧠✨

February 6, 2025

How to Select the Right Design Pattern for Your Software

Design patterns are essential tools in software engineering that provide well-established solutions to common problems. Selecting the right design pattern ensures efficient, maintainable, and scalable code. But how do you decide which one to use? Let’s break it down in a structured, modern approach.

Identifying the Problem Domain

Before choosing a design pattern, identify the type of problem you are solving:

🔹 Object Creation? → Use Creational Patterns
🔹 Object Assembly? → Use Structural Patterns
🔹 Object Interactions? → Use Behavioral Patterns

Creational Patterns (Managing Object Creation)

These patterns focus on the efficient creation of objects while keeping the system flexible and scalable.

Pattern When to Use Simple Story Related Patterns
Singleton Use when only one instance of a class should exist throughout the application. "A school has only one principal managing everything." Factory Method, Prototype
Factory Method Use when the creation process should be delegated to subclasses to maintain flexibility. "A bakery makes different types of cakes using different recipes." Abstract Factory, Builder
Abstract Factory Use when a family of related objects needs to be created without specifying their concrete types. "A car factory produces different models but follows the same process." Factory Method, Prototype
Prototype Use when creating objects is expensive, and cloning existing objects can improve performance. "A painter makes exact copies of their artwork instead of repainting it." Singleton, Builder
Builder Use when constructing a complex object requires step-by-step assembly. "Building a burger with different ingredients step by step." Factory Method, Prototype
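To make the creational category concrete, here is a minimal Singleton sketch in Python; the class name and fields are illustrative, not from any particular library:

```python
class AppConfig:
    """Singleton: the whole application shares one configuration object."""
    _instance = None

    def __new__(cls):
        # Create the instance on first use; afterwards always return the same one.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance

first = AppConfig()
second = AppConfig()
first.settings["env"] = "prod"
print(second.settings["env"])  # prints "prod": both names point at the same object
```

Like the single school principal in the story, every call to AppConfig() hands back the same object, so state written through one reference is visible through all the others.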

Structural Patterns (Organizing Object Composition)

These patterns help structure classes and objects for flexibility and efficiency.

Pattern When to Use Simple Story Related Patterns
Adapter Use when you need to make two incompatible interfaces work together. "A travel adapter allows your charger to work in different countries." Bridge, Facade
Bridge Use when you want to separate abstraction from implementation for better scalability. "A remote control works with different TV brands." Adapter, Composite
Composite Use when you need to treat a group of objects as a single entity. "A tree consists of branches, and branches have leaves, but all are part of the tree." Decorator, Flyweight
Decorator Use when you need to add new behavior dynamically to an object. "Adding extra cheese and toppings to a pizza without changing its base." Composite, Proxy
Facade Use when you need to simplify interactions with a complex subsystem. "A hotel concierge handles all guest requests instead of them contacting each service separately." Adapter, Proxy
Flyweight Use when many objects share similar data, and memory optimization is crucial. "Instead of giving each student a textbook, they all share a library copy." Composite, Proxy
Proxy Use when controlling access to an object is needed, such as for security or caching. "A receptionist verifies visitors before letting them into an office." Decorator, Flyweight
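Mirroring the pizza-toppings story above, here is a minimal Decorator sketch in Python; the class names and prices are made up for illustration:

```python
class Pizza:
    """Base component with a fixed cost."""
    def cost(self):
        return 8.0

class ToppingDecorator:
    """Wraps any pizza-like object and adds a topping's price to its cost."""
    def __init__(self, pizza, price):
        self.pizza = pizza
        self.price = price

    def cost(self):
        return self.pizza.cost() + self.price

# Stack decorators: base pizza + cheese (1.5) + olives (0.75)
order = ToppingDecorator(ToppingDecorator(Pizza(), 1.5), 0.75)
print(order.cost())  # 10.25
```

Each wrapper adds behavior without touching the base class, so new toppings never require editing Pizza itself.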

Behavioral Patterns (Managing Object Interactions)

These patterns focus on how objects communicate and collaborate.

Pattern When to Use Simple Story Related Patterns
Observer Use when multiple objects need to be notified of state changes. "A YouTuber uploads a video, and all subscribers get notified." Mediator, Event-driven systems
Strategy Use when multiple algorithms should be interchangeable dynamically. "A game character can switch between running, walking, and swimming modes." State, Template Method
Command Use when you need to encapsulate requests as objects for undo/redo operations. "A TV remote records previous actions, so you can undo a channel change." Chain of Responsibility, Mediator
State Use when an object needs to change behavior dynamically based on internal state. "A traffic light changes behavior based on the current light color." Strategy, Observer
Visitor Use when new operations need to be added to an object structure without modifying it. "A tour guide explains different exhibits without changing the museum setup." Composite, Iterator
Memento Use when object states need to be captured and restored without exposing internal details. "A video game lets players save and load progress at any time." Command, State
Iterator Use when sequential access is needed for a collection without exposing its structure. "Flipping through the pages of a book one by one." Composite, Visitor
Mediator Use when complex communications between objects should be centralized. "An air traffic controller manages communication between multiple pilots." Observer, Chain of Responsibility
Chain of Responsibility Use when a request needs to be processed by multiple handlers in sequence. "A customer service call passes through different departments before getting solved." Command, Mediator
Template Method Use when the structure of an algorithm should be defined, but steps should be customized. "A recipe provides basic steps, but ingredients can be changed." Strategy, Factory Method
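The Observer row above (the YouTuber and subscribers) can be sketched in a few lines of Python; the names are illustrative:

```python
class Channel:
    """Observer pattern: subscribers register callbacks and are notified on upload."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def upload(self, title):
        # Notify every registered observer of the state change.
        for notify in self.subscribers:
            notify(title)

inbox = []
channel = Channel()
channel.subscribe(lambda title: inbox.append("alice saw: " + title))
channel.subscribe(lambda title: inbox.append("bob saw: " + title))
channel.upload("New video")
print(inbox)  # ['alice saw: New video', 'bob saw: New video']
```

The channel never knows who its subscribers are or what they do with the notification, which is exactly the decoupling the pattern is for.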

Modern Approach to Choosing a Design Pattern

🔹 Scalability Concern? → Use Singleton, Factory, or Prototype to manage object creation efficiently.
🔹 Code Readability? → Use Facade or Adapter to simplify interfaces.
🔹 Extensibility? → Use Decorator or Strategy to add functionality dynamically.
🔹 Performance Optimization? → Use Flyweight or Proxy to reduce memory usage and improve speed.
🔹 Flexible Communication? → Use Observer, Mediator, or Chain of Responsibility to handle dynamic interactions.

Final Thoughts

Choosing the right design pattern depends on your specific problem and project needs. By understanding the problem domain and applying the right pattern, you can build more maintainable and scalable software. Keep experimenting with different patterns to refine your approach and create robust, efficient applications!

January 31, 2025

The Complete Guide to Managing Access Tokens in Keycloak

Keycloak is an open-source identity and access management solution that enables authentication and authorization for modern applications. One of its critical components is Access Tokens, which dictate user access permissions and session validity. Configuring access tokens effectively ensures a balance between security, user experience, and performance.

This guide provides a comprehensive look at managing access tokens in Keycloak. We will cover general practices, custom configurations, common challenges, debugging techniques, and best practices for securing and optimizing access tokens.




Keycloak Access Token Flow


1. Understanding Access Tokens in Keycloak

What is an Access Token?

An Access Token is a short-lived credential used by applications to access protected resources on behalf of a user. Keycloak issues these tokens after authentication, and they are included in API requests for authorization.

Key Access Token Properties:

  • Access Token Lifespan – Determines how long a token remains valid before expiration.
  • Refresh Token Lifespan – Defines the validity period for refresh tokens used to generate new access tokens.
  • SSO Session Timeout – Determines how long a user session remains valid across multiple applications.
  • Client Login Timeout – Specifies the time limit for a client to complete the login process before timeout.



2. Configuring Access Token Settings in Keycloak

Keycloak allows administrators to configure access token settings at different levels: Realm, Client, and User.

A. Changing Access Token Lifespan at the Realm Level

To modify Access Token settings for an entire realm:

  1. Log in to Keycloak Admin Console.
  2. Navigate to Realm Settings > Tokens.
  3. Modify the Access Token Lifespan (e.g., from 5 minutes to 30 minutes).
  4. Click Save.

Example: Updating Token Settings via Keycloak REST API

curl -X PUT "http://localhost:8080/auth/admin/realms/{realm}" \
     -H "Authorization: Bearer {admin-token}" \
     -H "Content-Type: application/json" \
     -d '{ "accessTokenLifespan": 1800 }'  # 30 minutes

B. Configuring Access Token Per Client

If you want different access token settings per client:

  1. Go to Clients in the Keycloak Admin Console.
  2. Select the client to configure.
  3. Open the Advanced Settings tab.
  4. Modify the Client Session Idle Timeout and Client Session Max Timeout.
  5. Click Save.

Use Case: API clients may have different access token requirements than web applications due to security concerns.

C. Setting Access Token Lifespan Per User

To configure access token settings for a specific user:

  1. Navigate to Users in Keycloak Admin.
  2. Select the user for whom you want to set a custom policy.
  3. Modify session-based token policies under Credentials.
  4. Click Save.

Use Case: You might want to grant longer access tokens to admins while limiting standard users.


3. Debugging and Troubleshooting Access Token Issues

A. Common Issues and Fixes

1. Token Expiring Too Soon

  • Check the Access Token Lifespan and SSO Session Timeout settings.
  • Use Refresh Tokens to extend sessions.

2. Invalid or Expired Access Token Errors

Verify token validity using Keycloak’s introspection endpoint:

curl -X POST "http://localhost:8080/auth/realms/{realm}/protocol/openid-connect/token/introspect" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -d "client_id={client-id}&client_secret={client-secret}&token={access-token}"

  • If expired, adjust token lifespan in Realm Settings.
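For quick local debugging, a JWT access token's expiry can also be read by decoding its payload segment. The sketch below builds a throwaway token just to demonstrate (a real one would come from Keycloak) and skips signature verification entirely, so it is for debugging only:

```python
import base64, json, time

def jwt_expiry_seconds(token):
    """Seconds until the JWT expires (negative if already expired).
    Decodes the payload without verifying the signature: debugging only."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"] - time.time()

# Throwaway token with an exp claim 300 seconds in the future (header/signature faked):
body = base64.urlsafe_b64encode(
    json.dumps({"exp": int(time.time()) + 300}).encode()
).decode().rstrip("=")
demo_token = "x." + body + ".y"
print(round(jwt_expiry_seconds(demo_token)))  # about 300
```

If the locally decoded exp looks wrong, compare clocks between the Keycloak server and the client: token-expiry errors are often just clock skew.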

3. Token Not Containing Required Claims

  • Ensure Client Scopes are correctly configured.
  • Modify mappers in Client Scopes to include necessary claims.

4. Customizing Access Token Behavior in Keycloak

A. Extending Access Token Claims

To add custom claims to access tokens:

  1. Go to Client Scopes in Keycloak Admin.
  2. Create or modify a mapper to include custom claims.
  3. Assign the scope to the client where needed.

Example: Adding a Custom Claim via Java Code

public class CustomTokenMapper extends AbstractOIDCProtocolMapper implements OIDCIDTokenMapper {
    @Override
    public IDToken transformIDToken(IDToken token, ProtocolMapperModel mappingModel, KeycloakSession session, UserSessionModel userSession, ClientSessionContext clientSessionCtx) {
        token.getOtherClaims().put("custom_claim", "custom_value");
        return token; // the transformed token must be returned
    }
}

B. Implementing Script-Based Custom Token Logic

For advanced scenarios, create a custom provider using Keycloak’s SPI:

  1. Implement a Java class extending OIDCProtocolMapper.
  2. Register it in Keycloak’s providers directory.
  3. Restart Keycloak to apply changes.

5. Best Practices for Access Token Management

  • Shorten Access Token Lifespan: Reduces risk if a token is compromised.
  • Use Refresh Tokens: Instead of extending access token lifespan, leverage short-lived access tokens with refresh tokens.
  • Limit Token Scope: Assign minimal permissions necessary.
  • Enable Token Introspection: Validate access tokens dynamically before granting access.
  • Use Client Credentials Grant for Machine-to-Machine Communication: This avoids unnecessary user-based authentication.

Conclusion

Managing access tokens in Keycloak requires careful consideration of security, usability, and performance. Whether adjusting global settings, client-specific configurations, or per-user access policies, Keycloak offers extensive flexibility to fine-tune authentication and authorization.

By following best practices and leveraging debugging techniques, you can create a secure, scalable, and efficient authentication system tailored to your requirements.

Do you have any Keycloak challenges? Drop a comment below, and let’s solve them together! 🚀

The Journey to Database Performance: A Story of Growth and Optimization

Once upon a time in a bustling tech city, a startup named DataFlow was struggling with performance issues in their application. As their user base grew, their database queries slowed down, frustrating users and causing downtime. The founders, Alex and Mia, embarked on a journey to optimize their database performance, uncovering essential strategies along the way.



Chapter 1: The Need for Speed - Understanding Database Performance

DataFlow's first challenge was understanding why their database was slowing down. They learned that database performance is influenced by:

  • Query Execution Time – Queries should execute within milliseconds (ideally <100ms for most applications).
  • Throughput – The number of transactions processed per second (TPS), typically needing to support thousands of queries per second in high-performance systems.
  • Latency – The delay before receiving a response, which should be minimized to <1ms for key lookups and <10ms for complex queries.
  • Resource Utilization – The CPU, memory, and disk usage, which should be optimized to avoid overloading the system.

Realizing the impact of slow database performance, they set out to explore solutions.

Chapter 2: The Magic of Indexing

Alex and Mia discovered Indexing, a way to speed up queries by minimizing the amount of scanned data.

When to Use Indexing

Indexing is essential when queries frequently search for specific values, filter large datasets, or perform sorting operations. However, excessive indexing can slow down write operations.

How Indexing Evolved

Initially, databases scanned full tables to find relevant data. With indexing, databases could quickly locate records, significantly improving performance.

Implementing Indexing Today

To optimize their queries, they created indexes on frequently searched columns.

CREATE INDEX idx_users_email ON users(email);

Key Lessons:

  • Index only necessary columns to avoid excessive memory usage.
  • Use composite indexes for multi-column searches.
  • Regularly analyze and update indexes.

Chapter 3: The Power of Sharding and Partitioning

As their database grew beyond 500GB of data or more than 100,000 queries per second, single-server storage became a bottleneck. They turned to sharding and partitioning to distribute data efficiently.

When to Use Sharding

Sharding is necessary when:

  • The database exceeds 500GB and is too large for a single server.
  • Read and write operations exceed 100,000 queries per second, causing high latency.
  • The system requires horizontal scalability to distribute load across multiple servers.

The Evolution of Sharding

Early systems stored all data in a single location, leading to failures under heavy loads. Sharding divides the data into multiple smaller, more manageable pieces.

Types of Sharding

  1. Horizontal Sharding – Splitting a table's rows across multiple databases. Example:
    • Users with IDs 1-1M go to Database A.
    • Users with IDs 1M+ go to Database B.
  2. Vertical Sharding – Splitting by feature or module.
    • User profiles stored in Database A.
    • Transactions stored in Database B.
  3. Hash-Based Sharding – Distributing data using a hash function to ensure even distribution.

Implementing Sharding

Using hash-based sharding, they distributed user data across multiple servers.

import hashlib

def get_shard(user_id, num_shards=3):
    return int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % num_shards

Each user is assigned to a shard, ensuring even distribution and scalability.
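Assuming MD5 spreads ids roughly uniformly, a quick check of the hash-based sharding function above confirms the even distribution described:

```python
import hashlib
from collections import Counter

def get_shard(user_id, num_shards=3):
    # Same hash-based sharding as above: hash the id, take it modulo the shard count.
    return int(hashlib.md5(str(user_id).encode()).hexdigest(), 16) % num_shards

# Count how 30,000 sequential user ids land across 3 shards.
counts = Counter(get_shard(uid) for uid in range(30_000))
print(counts)  # each shard receives roughly 10,000 users
```

Because the hash is deterministic, the same user always routes to the same shard, while sequential ids (which would create a hot spot under naive range sharding) are spread evenly.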

Best Practices:

  • Choose a proper sharding key to avoid hot spots.
  • Balance data across shards to prevent uneven loads.
  • Implement a lookup service to track shard assignments.

Chapter 4: Denormalization - The Trade-off Between Storage and Speed

Joins were slowing down their queries. They learned about denormalization, which combines tables to reduce query complexity.

When to Use Denormalization

Denormalization is helpful when:

  • Query response time needs to be under 10ms.
  • The database is read-heavy with complex joins impacting performance.
  • Aggregated data is frequently needed and must be precomputed.

Evolution of Denormalization

Initially, databases followed strict normalization rules. But for performance, denormalization became necessary.

Implementing Denormalization

They stored user details directly within the orders table.

SELECT order_id, user_name, user_email FROM orders;

Key Takeaways:

  • Use denormalization when read performance is critical.
  • Keep redundant data updated across tables.

Chapter 5: Database Replication - Ensuring High Availability

To handle high read requests, they implemented database replication.

When to Use Replication

Replication is useful when:

  • High availability is required (99.99% uptime target).
  • Read-heavy workloads exceed 10,000 queries per second.
  • Disaster recovery mechanisms are necessary.

How Replication Evolved

From single-database models, replication emerged to maintain multiple copies of the same data.

Implementing Replication

They set up a primary database for writes and read replicas for scaling.

ALTER SYSTEM SET wal_level = 'replica';

Best Practices:

  • Monitor replication lag.
  • Distribute read queries to replicas.

Chapter 6: Managing Concurrency with Locking Techniques

With more users accessing the system, they faced concurrency issues. They adopted optimistic and pessimistic locking to prevent conflicts.

When to Use Locking

  • Use optimistic locking for low-contention scenarios.
  • Apply pessimistic locking for high-contention operations with over 100 concurrent transactions per second.

Evolution of Locking

Traditional locking led to performance bottlenecks, evolving into lightweight, optimistic approaches.

Implementing Locking

For optimistic locking, they used version numbers.

UPDATE accounts SET balance = balance - 100, version = version + 1 WHERE id = 1 AND version = 1;
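The compare-and-set idea behind this statement can be sketched in Python with an in-memory record (field names are illustrative):

```python
account = {"id": 1, "balance": 500, "version": 1}

def optimistic_debit(account, amount, expected_version):
    """Apply the debit only if the row version is still what we read earlier."""
    if account["version"] != expected_version:
        return False  # another writer got there first: re-read and retry
    account["balance"] -= amount
    account["version"] += 1  # bump the version so later writers detect the change
    return True

print(optimistic_debit(account, 100, expected_version=1))  # True: first writer wins
print(optimistic_debit(account, 100, expected_version=1))  # False: stale version
print(account["balance"])  # 400
```

A writer that loses the race gets a failed update rather than a silently overwritten balance, and simply re-reads the row and retries.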

Chapter 7: Connection Pooling - Handling High Traffic Efficiently

As traffic increased, they needed connection pooling to manage database connections efficiently.

When to Use Connection Pooling

Connection pooling is necessary when:

  • The application handles over 1,000 concurrent connections.
  • Opening new connections is expensive.
  • The database needs optimized resource utilization.

Implementing Connection Pooling

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
config.setMaximumPoolSize(10);
HikariDataSource ds = new HikariDataSource(config);

Chapter 8: Caching - The Ultimate Performance Booster

Finally, they adopted caching to reduce database queries and enhance speed.

When to Use Caching

Caching is beneficial when:

  • The same data is accessed more than 100 times per second.
  • Read performance needs to be sub-millisecond.
  • Reducing database load is a priority.

Implementing Caching with Redis

import redis

cache = redis.Redis(host='localhost', port=6379)
cached_data = cache.get("user:123")  # returns None on a cache miss
if not cached_data:
    cached_data = fetch_from_db("SELECT * FROM users WHERE id = 123")  # placeholder for your data-access layer
    cache.set("user:123", cached_data, ex=3600)  # expire after one hour

Epilogue: A Faster, More Scalable Future

After implementing these strategies, DataFlow's database became faster, more reliable, and ready for scale. Their journey taught them the importance of continuous monitoring and optimization.

Fixing "Invalid Signature File Digest for Manifest Main Attributes" in Java JAR Files

Introduction

When working with Java applications, particularly when dealing with JAR files, you might encounter the error:

java.lang.SecurityException: Invalid signature file digest for Manifest main attributes

This error typically occurs when a signed JAR file has been modified, corrupted, or is incompatible with the Java runtime. In this post, we'll break down the issue, debug it, troubleshoot possible causes, and provide multiple solutions to fix it permanently.


Understanding the Problem Statement

Why Does This Error Happen?

  • The JAR file was signed but later modified, breaking its digital signature.
  • A dependency in your project (especially from a remote Maven repository) is corrupted or incorrectly signed.
  • Java version incompatibility (different versions handle JAR signing differently).
  • IntelliJ IDEA, Maven, or Gradle caching issues.
  • Incorrectly packaged JAR due to build misconfiguration.
  • JAR contains third-party libraries with outdated or conflicting signatures.

How to Debug and Identify the Issue

Before applying a fix, let’s find out the root cause.

1. Check the Java Version

Run the following command to ensure you're using a compatible Java version:

java -version

If you are using an older or a newer version than expected, try switching to a different version using SDKMAN! or manually setting the correct version.

sdk use java 11.0.16-amzn

2. Verify the Problematic JAR File

Identify the JAR causing the issue:

jar tvf yourfile.jar | grep META-INF

If the JAR is signed, you'll see .SF and .RSA files in the META-INF/ directory. If any changes were made to the JAR, the signature is no longer valid.

3. Check for Maven or Gradle Dependencies Issues

If you're using Maven, try running:

mvn dependency:tree

Check if any third-party dependencies could be causing the issue. If you see a suspicious dependency, try excluding or updating it.

For Gradle users:

gradle dependencies

Troubleshooting and Fixing the Issue

Method 1: Rebuild the JAR Without Signature

If you are building the JAR yourself, try re-packaging it without signing:

jarsigner -verify -verbose -certs yourfile.jar

If verification fails, repackage it without the signature:

zip -d yourfile.jar 'META-INF/*.SF' 'META-INF/*.RSA' 'META-INF/*.DSA'

Then, rebuild the project and check again.


Method 2: Clear the Local Maven Repository (Maven Issue)

Sometimes, Maven downloads a corrupted dependency. Try deleting and redownloading dependencies:

rm -rf ~/.m2/repository
mvn clean package

If the issue persists, manually delete the problematic JAR from ~/.m2/repository and download a fresh copy.

mvn dependency:purge-local-repository
mvn clean install

Method 3: Invalidate IntelliJ IDEA Cache

If you're working in IntelliJ IDEA, caching issues may cause this error. Try the following:

  1. Go to File > Invalidate Caches / Restart
  2. Select Invalidate and Restart
  3. Clean and rebuild your project:

mvn clean package

If you use Gradle:

gradle clean build

Method 4: Ensure Java Version Compatibility

If your Java version is causing the issue, switch to a compatible version and rebuild the project.

For Maven, specify the Java version in pom.xml:

<properties>
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
</properties>

For Gradle, add this in build.gradle:

targetCompatibility = JavaVersion.VERSION_11
sourceCompatibility = JavaVersion.VERSION_11

Method 5: Redownload the JAR from a Trusted Source

If you suspect the JAR is corrupted, download it manually from a trusted source (e.g., Maven Central Repository, official vendor website) and replace the existing one.

wget https://repo.maven.apache.org/maven2/.../yourfile.jar

Permanent Solutions

1. Avoid Modifying Signed JARs

If your application depends on signed JARs, avoid modifying them after signing. Use a different packaging strategy to prevent accidental tampering.

2. Use jarsigner to Re-sign JARs

If you control the JAR, you can re-sign it with a valid key:

jarsigner -keystore mykeystore.jks -storepass changeit yourfile.jar myalias

3. Automate JAR Verification in CI/CD

To prevent invalid JARs from being used, integrate a verification step in your CI/CD pipeline:

jarsigner -verify -certs yourfile.jar

If the verification fails, reject the build to avoid issues in production.


Conclusion

This error is primarily caused by Java’s security checks on signed JAR files. Depending on the scenario, the best fix may be:

  • Rebuilding the JAR without a signature (if applicable)
  • Cleaning and redownloading dependencies
  • Switching to a compatible Java version
  • Invalidating IDE or Maven caches
  • Ensuring that all JAR files come from trusted sources

By following these debugging and troubleshooting steps, you can resolve the issue and prevent it from occurring in the future.


Have You Encountered This Issue?

Let me know your experience in the comments below! If you found another fix, feel free to share it. 🚀

January 30, 2025

Authenticate vs AuthenticateOnly in Keycloak: Choosing the Right Method for Your Authentication Flow

When implementing authentication in a secure web application, Keycloak provides a flexible authentication system that allows you to handle different steps in the authentication process. But one question that often comes up is: When should I use authenticate() vs authenticateOnly()?

Both methods are essential for different scenarios, but understanding their differences will help you decide when and why to use each one in your app's security workflow.

Let’s break down authenticate() and authenticateOnly(), how they differ, and when to use each for optimal authentication flow management.

What’s the Difference?

1. authenticate(): The Full Authentication Flow

The authenticate() method is used to complete the entire authentication process. This includes everything from credential validation (username/password) to multi-factor authentication (MFA) and token issuance.

Once this method is called, Keycloak marks the user as authenticated, issues any necessary tokens (like access and refresh tokens), and starts a session for that user. The user is now ready to access protected resources and can be redirected to the appropriate page (e.g., the home page, dashboard, etc.).

Key Actions with authenticate():
  • Finalizes the authentication session: Sets up a valid session for the user.
  • Issues tokens: If configured, the access and refresh tokens are generated and associated with the user.
  • Triggers login events: The event system records a login event.
  • Redirects the user: Based on your configuration, the user is sent to the correct post-login location.

2. authenticateOnly(): Validating Credentials Without Completing the Flow

The authenticateOnly() method is a lighter, more specialized method. It is used for validating credentials or performing other checks like multi-factor authentication (MFA) but without finalizing the authentication process.

When you call authenticateOnly(), you’re just checking if the user is valid (for instance, verifying their username/password or MFA token), but you’re not completing the session. This is useful in situations where you might need to verify something before fully logging the user in.

Key Actions with authenticateOnly():
  • Validates credentials: Checks whether the user’s credentials are correct.
  • Doesn’t finalize the authentication session: The session remains uninitialized, and no tokens are issued.
  • No login event: No login event is triggered; the user isn’t officially logged in yet.
  • No redirection: No redirection happens, since the flow isn’t finalized.

When to Use Each Method?

Use authenticate() When:

  • You’re ready to complete the entire authentication flow: After the user’s credentials (and optional MFA) are validated, you call authenticate() to finalize their session.
  • You need to issue tokens: For user sessions that require tokens for API access, you'll need to use authenticate().
  • The user should be able to access the system immediately: After a successful authentication, you want the user to be logged in and able to interact with your system right away.

Use authenticateOnly() When:

  • You need to perform credential validation but don’t want to finish the entire authentication flow (e.g., checking user credentials but still deciding if you need to continue with additional checks like MFA).
  • You’re just verifying the user: For example, if you need to verify MFA before proceeding with final authentication, use authenticateOnly().
  • Skipping token issuance: If you’re only validating the user's credentials but don’t need to issue tokens yet (e.g., in a case where the session state isn’t needed right now).
  • Testing credentials or certain conditions: For pre-checks like validating a password or OTP, but you don’t need to proceed with the user being fully authenticated yet.

Key Differences at a Glance:

Feature | authenticate() | authenticateOnly()
Session Finalization | Yes, the session is marked as authenticated. | No session finalization happens.
Token Issuance | Yes, access and refresh tokens are issued. | No tokens are issued.
Login Event | Yes, a login success event is triggered. | No login event is triggered.
Redirect | Yes, redirects the user based on configuration. | No redirection occurs.
When to Use | When the user is fully authenticated and ready for system access. | When you need to validate credentials but not finalize the authentication.

How to Fix "Error Occurred: response_type is null" on authenticate() Call

One common issue developers may encounter when using authenticate() is the error: "Error occurred: response_type is null". This typically happens when Keycloak is unable to determine the type of response expected during the authentication flow, which can occur for various reasons, such as missing or misconfigured parameters.

Steps to Fix the Issue:

  1. Check the Authentication Flow Configuration: Ensure that your authentication flow is properly configured. If you're using an OAuth2/OpenID Connect flow, ensure that you are sending the correct response_type in the request. The response_type is typically set to code for authorization code flow, token for implicit flow, or id_token for OpenID Connect flows.

  2. Validate the Client Configuration: The client configuration in Keycloak should specify the correct response_type. Ensure that the Valid Redirect URIs and Web Origins are correctly configured to allow the response type you're using.

  3. Inspect the Request URL: Verify that the request URL you’re using to trigger the authentication flow includes the necessary parameters, including response_type. Missing parameters in the URL can cause Keycloak to not process the authentication correctly.

    Example URL:

    /protocol/openid-connect/auth?response_type=code&client_id=YOUR_CLIENT_ID&redirect_uri=YOUR_REDIRECT_URI&scope=openid
  4. Use Correct Protocol in Authentication: If you're using a non-standard protocol or custom flows, make sure the appropriate response_type is explicitly specified and is supported by your client.

  5. Debug Logs: Enable debug logs in Keycloak to get more insights into the issue. The logs will help you track the flow and identify which part of the request is causing the problem.

  6. Review Custom Extensions: If you're using custom extensions or modules with Keycloak, ensure that they aren’t interfering with the authentication flow by removing or bypassing necessary parameters.

Real-World Examples

Scenario 1: Full Authentication (Password + MFA)

You have a system that requires both a password and multi-factor authentication (MFA). Once the password is verified and the MFA code is correct, you want to fully authenticate the user and issue tokens.


if (isPasswordValid() && isMFAValid()) {
    processor.authenticate(); // Complete the authentication flow
} else {
    throw new AuthenticationException("Invalid credentials or MFA failed");
}

Scenario 2: MFA Verification Only

Imagine you’ve already validated the user’s password, but now you need to verify their MFA code. You use authenticateOnly() to verify the MFA, but you don’t want to finalize the authentication session until everything is validated.


if (isMFAValid()) {
    processor.authenticateOnly(); // Only validate MFA, don't finalize the session yet
} else {
    throw new AuthenticationException("Invalid MFA code");
}

Conclusion

Understanding when to use authenticate() versus authenticateOnly() is crucial for optimizing your authentication flows in Keycloak. If you’re finalizing the user’s login and granting access, use authenticate(). If you’re performing intermediate checks or need to validate credentials without finalizing the session, authenticateOnly() is the better option.

In case you're facing the error "response_type is null", ensure that your authentication request includes the correct parameters, the client is properly configured, and the correct flow is being used.

By leveraging these methods appropriately, you can create a more secure and efficient authentication process for your users, giving you fine-grained control over how and when authentication happens in your system.

January 29, 2025

The Ultimate Guide to Open-Source Java Frameworks

Java remains one of the most widely used programming languages in the world, and a large part of its success can be attributed to the vast ecosystem of open-source frameworks available for developers. These frameworks help streamline development, improve efficiency, and enhance scalability across different types of applications.

In this blog post, we'll explore some of the most popular open-source Java frameworks categorized by their use cases, sorted by start year, along with when you should use them, upcoming trends, and whether you should learn them.

Java Frameworks Overview

| Category | Framework | Start Year | Description | When to Use | Upcoming Trends | Should I Learn? |
|---|---|---|---|---|---|---|
| Web Development | Jakarta EE | 1999 | Enterprise Java development | Large-scale enterprise applications | More cloud-native features | Yes, for enterprise Java apps |
| Web Development | Spring MVC | 2003 | Web application framework | Standard MVC-based web apps | Better developer productivity tools | Yes, widely used |
| Web Development | Apache Struts | 2000 | MVC framework | Enterprise web applications | Security updates | No, outdated, but useful for legacy systems |
| Web Development | Play Framework | 2007 | Reactive web framework | High-performance reactive applications | Performance optimizations | Yes, if building reactive web apps |
| Web Development | JHipster | 2013 | Generates Spring Boot and frontend apps | Rapid development of modern web apps | AI-driven code generation | Yes, for full-stack developers |
| Web Development | Spring Boot | 2014 | Microservices and enterprise applications | Quick setup for enterprise and microservices | Serverless computing, AI integration | Yes, essential for modern Java devs |
| Microservices & Cloud | Apache Camel | 2007 | Enterprise integration framework | Complex integration patterns | API-driven integrations | Yes, for enterprise integration |
| Microservices & Cloud | Dropwizard | 2011 | Lightweight RESTful microservices | Quick REST API development | Enhanced resilience tools | Yes, for simple microservices |
| Microservices & Cloud | Eclipse Vert.x | 2013 | Reactive applications toolkit | High-throughput reactive apps | Improved concurrency support | Yes, for high-performance apps |
| AI & Machine Learning | WEKA | 1993 | ML algorithms for data mining | Research and experimentation | Enhanced deep learning support | Yes, for data science |
| AI & Machine Learning | Apache Mahout | 2008 | Machine learning | Big data analytics | More big data support | Yes, for big data applications |
| AI & Machine Learning | Deeplearning4j | 2014 | Deep learning on JVM | Neural networks on Java | More pre-trained models | Yes, for AI in Java |
| AI & Machine Learning | Deep Java Library (DJL) | 2019 | Deep learning for Java | Java-based AI applications | Improved GPU acceleration | Yes, for AI enthusiasts |
| Policy & Rule Engines | Drools | 2001 | Business rule engine | Complex business logic | Improved AI-driven decision-making | Yes, for business applications |
| Policy & Rule Engines | Rego (OPA) | 2016 | Policy-as-code framework | Cloud security policies | More integrations with cloud security | Yes, for cloud security |
| Messaging & Notifications | Apache Kafka | 2011 | Distributed event streaming platform | Real-time data processing and event-driven systems | AI-driven automation | Yes, for scalable event-driven systems |
| Messaging & Notifications | RabbitMQ | 2007 | Message broker | Asynchronous messaging | Enhanced reliability and scaling | Yes, for decoupled microservices |
| Messaging & Notifications | Twilio Java SDK | 2008 | SMS and voice API integration | Sending OTP, SMS, voice calls | AI-powered messaging | Yes, for communication-based apps |
| Messaging & Notifications | Firebase Cloud Messaging (FCM) | 2016 | Push notification service | Mobile and web notifications | More advanced delivery features | Yes, for mobile and web apps |
| Email Solutions | JavaMail API | 1997 | Email handling in Java | Sending and receiving emails | Enhanced security and cloud support | Yes, for email-based apps |
| Email Solutions | Apache James | 2003 | Email server and mail handling | Custom mail servers | AI-powered spam filtering | Yes, for enterprise email solutions |

How to Work with Video, Image, Audio, AI, PDF, Docs, QR Code, Payment Solutions, OTP, SMS, Email, and Notifications in Java

| Use Case | Framework | Description |
|---|---|---|
| Video Processing | Xuggler | Java library for video encoding and decoding |
| Video Processing | JavaCV | Wrapper for OpenCV with video processing support |
| Image Processing | OpenIMAJ | Open-source image and video processing library |
| Image Processing | Marvin Framework | Image processing algorithms and filters |
| Audio Processing | TarsosDSP | Audio signal processing library |
| Audio Processing | JAudioTagger | Java library for reading and editing audio metadata |
| AI & LLM | Deep Java Library (DJL) | Deep learning framework for Java |
| AI & LLM | Stanford NLP | Natural Language Processing (NLP) toolkit |
| PDF & Document Handling | Apache PDFBox | Library for handling PDFs in Java |
| PDF & Document Handling | iText | PDF generation and manipulation library |
| QR Code Generation | ZXing | Java-based barcode and QR code generator |
| QR Code Generation | QRGen | QR code generator built on top of ZXing |
| Payment Solutions | JavaPay | API for integrating payment gateways |
| Payment Solutions | Stripe Java SDK | Library for handling payments with Stripe |
| OTP & SMS | Twilio Java SDK | API for sending OTPs and SMS messages |
| OTP & SMS | Firebase Authentication | OTP-based authentication for mobile and web apps |
| Email & Notifications | JavaMail API | Java library for sending emails |
| Email & Notifications | Firebase Cloud Messaging (FCM) | Push notification service for mobile and web apps |
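Before an OTP can be delivered through a provider such as Twilio or Firebase, your application must generate one. A minimal sketch using only the JDK's SecureRandom (the class and method names here are illustrative, not part of any listed SDK):

```java
import java.security.SecureRandom;

public class OtpGenerator {

    // SecureRandom is preferred over java.util.Random for security-sensitive codes.
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates a numeric OTP of the given length, e.g. "042319" for length 6.
    public static String generateOtp(int length) {
        StringBuilder otp = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            otp.append(RANDOM.nextInt(10)); // append one digit, 0-9
        }
        return otp.toString();
    }

    public static void main(String[] args) {
        System.out.println(generateOtp(6));
    }
}
```

The generated string would then be stored server-side (with an expiry) and handed to a delivery SDK such as the Twilio Java SDK for the actual SMS send.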

Conclusion

Java has a rich ecosystem of open-source frameworks catering to various domains such as web development, microservices, AI, security, messaging, and multimedia. Whether you should learn a framework depends on your use case and career goals. With the rise of AI, cloud computing, and real-time applications, staying up to date with the latest frameworks will keep you ahead in the industry.