Posts

Apache Kafka Using KRaft

Getting Started with Kafka in KRaft Mode: A Step-by-Step Guide

Kafka is a powerful platform for real-time data processing. Traditionally, it relied on ZooKeeper for controller election and state management. KRaft mode, introduced as early access in Kafka 2.8 and production-ready since 3.3, offers significant improvements in reliability, performance, and manageability. This blog post provides a step-by-step guide to running Kafka in KRaft mode, helping you unlock its benefits. Let's dive in!

Understanding Kafka Configuration Files

Navigating the configuration directory:

Bash
cd /opt/kafka
ls config/kraft

This navigates to the Kafka installation directory and lists the configuration files specific to KRaft mode.

Configuration file breakdown:
broker.properties: settings for the broker role, which manages topic partitioning and data storage/retrieval.
controller.properties: settings for the controller role, which handles KRaft-based leader election.
server.properties: combines the settings of both broker.properties and controller.properties for a strea...
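For context, here is a minimal sketch of the combined-role settings that server.properties typically carries in KRaft mode. The node id, ports, and log directory below are illustrative defaults, not values taken from the post:

```properties
# Run both roles in one process (combined mode); all values illustrative.
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
```

In a production deployment the broker and controller roles usually run as separate processes, which is why broker.properties and controller.properties exist as separate files.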

Apache Kafka Setup in Google Cloud

This blog post guides you through setting up a basic Kafka environment on Google Cloud Platform for learning purposes. Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines, and it is widely used for real-time data processing. We'll walk through launching a Kafka cluster, creating a topic, and sending and consuming messages.

Prerequisites:
A Google Cloud Platform account

Steps:
Deploying Kafka:
Head over to the Google Cloud Marketplace: https://console.cloud.google.com/marketplace/product/google/kafka
Click on "LAUNCH" and proceed with the deployment configuration.
Important: For the service account, you can choose an existing one or create a new one with appropriate permissions.
Select a deployment region closest to you for optimal performance.
Keep the disk space settings at default for this learning exe...

Kubernetes and Helm Packaging

SETUP for Kubernetes

Install Docker and check the version (visit the Docker website to install):
docker version

Install Minikube:
wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo cp minikube-linux-amd64 /usr/local/bin/minikube
sudo chmod 755 /usr/local/bin/minikube
minikube version
minikube start

minikube start launches the Kubernetes cluster; this command takes a couple of minutes to complete.

Install kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Make the binary executable and move it to the user bin directory:
chmod u+x kubectl
sudo mv kubectl /usr/local/bin/

Execute kubectl version to check the client and server versions (compatible GitVersions, e.g. v1.25.2). Check whether the cluster is running with minikube status.

Helm Installation
Visit https:/...

How to Test Application Context in Spring Boot

Usually, we don't bother writing JUnit tests for the application context and bean instantiation; we blindly trust the stability of the Spring Boot framework. Spring Boot automatically generates a test class with a contextLoads() method:

@Test
void contextLoads() {
}

To check whether the beans are loading properly, we can frame the Spring Boot application like below:

@SpringBootApplication
public class PracticeApplication {
    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(PracticeApplication.class, args);
        printBeanNames(ctx);
    }

Let's deal with the context object:

    private static void printBeanNames(ApplicationContext ctx) {
        String[] beanDefinitionNames = ctx.getBeanDefinitionNames();
        for (String bean : beanDefinitionNames)
            System.out.println("BeanName is " + bean);
        int beanDefinitionCount = ctx.getBeanDefinitionCount();
        System.out.println("beanDefinitionCount...
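Outside a Spring classpath, the same iterate-and-count pattern can be sketched with a plain map standing in for the bean registry. The BeanRegistrySketch class and the bean names below are hypothetical illustrations, not Spring API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for Spring's bean-definition registry, used only to
// illustrate the getBeanDefinitionNames()/getBeanDefinitionCount() pattern.
public class BeanRegistrySketch {
    private final Map<String, Object> beans = new LinkedHashMap<>();

    public void register(String name, Object bean) {
        beans.put(name, bean);
    }

    // Mirrors ApplicationContext#getBeanDefinitionNames()
    public String[] getBeanDefinitionNames() {
        return beans.keySet().toArray(new String[0]);
    }

    // Mirrors ApplicationContext#getBeanDefinitionCount()
    public int getBeanDefinitionCount() {
        return beans.size();
    }

    public static void main(String[] args) {
        BeanRegistrySketch ctx = new BeanRegistrySketch();
        ctx.register("practiceApplication", new Object());
        ctx.register("dataSource", new Object());
        for (String bean : ctx.getBeanDefinitionNames()) {
            System.out.println("BeanName is " + bean);
        }
        System.out.println("beanDefinitionCount " + ctx.getBeanDefinitionCount());
    }
}
```

In a real test, the same loop runs against the ConfigurableApplicationContext returned by SpringApplication.run, as shown above.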

SOLID Principle (Quick Read)

This famous design principle comprises five strategies that a developer should follow for successful application development:
Single Responsibility Principle
Open/Closed Principle
Liskov Substitution Principle
Interface Segregation Principle
Dependency Inversion Principle

Single Responsibility Principle (SRP): no code unit (function/class/package) should have more than one responsibility.

Bad example:

class Bird {
    String result;
    public void fly(Bird bird) {
        if (bird == pigeon)
            result = "flies 20 meters high";
        else if (bird == hen)
            result = "flies 5 meters";
        else
            result = "not measured yet";
    }
}

This class should be designed in such a way that it works well for all types of birds. The current design is hectic to maintain for the following reasons:
difficult to test
difficult for parallel programming
understanding the code...
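One way the bad example above could be refactored is to move each bird's flight behavior into its own class, so adding a new bird means adding a class rather than growing the if/else chain. The class and method names here are illustrative, not from the original post:

```java
// Each bird type owns its flight behavior; no central if/else chain.
interface Bird {
    String fly();
}

class Pigeon implements Bird {
    public String fly() { return "flies 20 meters high"; }
}

class Hen implements Bird {
    public String fly() { return "flies 5 meters"; }
}

public class BirdDemo {
    public static void main(String[] args) {
        Bird[] birds = { new Pigeon(), new Hen() };
        for (Bird b : birds) {
            System.out.println(b.fly());
        }
    }
}
```

With this shape, each class has a single reason to change, and the code that uses Bird never needs to know which concrete type it is holding.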

Microservices - Cloud Configurations - Spring

Let's explore microservice deployment and registration using a Service Registry; controlling the communication using API Gateways will follow in the next blog. We'll create a Library microservice using an H2 database for persistence. The entire codebase will be shared on Git; the link will be shared at the end of the blog.

Let's create the Books microservice like below:
Controller
Services
Repository
Entity

application.yaml:
server:
  port: 9000

Postman test output.

Create the remaining microservices the same way. Refer to the code at the end. The main goal is to access them using the gateway service.

Big Step:
We're going to add the microservices to the Service Registry. To do that, create one more microservice with the below dependency:

<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
  </dependency>
  <dependency>
    <groupId ...
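Alongside that dependency, a standalone Eureka registry usually needs a small application.yaml of its own. The port and flags below are conventional Eureka-server defaults, not values taken from the post:

```yaml
# Hypothetical application.yaml for the registry service; values are
# conventional Eureka-server defaults, not from the original post.
server:
  port: 8761

eureka:
  client:
    # A standalone registry should not register with itself or fetch a registry.
    register-with-eureka: false
    fetch-registry: false
```

Setting both client flags to false keeps the registry from trying to act as its own client, which would otherwise produce connection warnings at startup.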