There are multiple ways to weave profiling data with distributed tracing in #OpenTelemetry. Elastic's inferred spans feature is now available with OpenTelemetry Java. Learn how to enrich distributed traces with inferred spans from in-process profiling data, revealing latency gaps within your tracing data. Try out this new feature with OTel and your Java application! https://lnkd.in/enXugjwf #Observability
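For orientation, a minimal sketch of how such an agent extension is typically wired up. This assumes the OpenTelemetry Java agent plus an inferred-spans extension jar from opentelemetry-java-contrib; the exact artifact and property names are assumptions here, so check the linked post for the real ones:

```shell
# Hypothetical setup: attach the OTel Java agent, load the inferred-spans
# extension, and enable it via a system property. Names are illustrative.
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.javaagent.extensions=opentelemetry-inferred-spans.jar \
     -Dotel.inferred.spans.enabled=true \
     -jar my-app.jar
```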
Alexander Wert’s Post
-
Ever wanted to know how to trace those "black holes" of code where you can't easily add instrumentation? At Elastic, we've added inferred spans to the OpenTelemetry Java SDK. https://lnkd.in/gu3ZqNgb #opentelemetry #java #APM #observability
Revealing unknowns in your tracing data with inferred spans in OpenTelemetry
elastic.co
-
Software Engineer 2 @ Intuit • xSwiggy • Passionate about tech and helping others learn! Sharing bite-sized tech insights.
Have you ever encountered a situation where your program needs to process messages from a queue, but you don't want it to get overwhelmed? I once ran into this kind of challenge, which led me to explore a solution in Java that I'm sharing here today. In Java, we usually use a ThreadPoolExecutor to manage a pool of worker threads that process these messages. By default, however, the ThreadPoolExecutor buffers tasks in its internal queue until a thread becomes available, which can exhaust the memory of the processing service if the processing rate cannot keep up with the consumption rate. Here are a couple of solutions I came up with:
1. Semaphores and bounded thread pools: Give the pool a semaphore with as many permits as it has worker threads. The submitting thread must acquire a permit before handing over a task; if all permits are taken, submission blocks, which in turn blocks the consumption thread and thus limits the rate of consumption. Here is one simple implementation I found on the internet: https://t.ly/ycrKc.
2. Callers Run Policy: When the pool is full and all worker threads are occupied, we can instruct the caller thread to execute the task itself. This can be done by setting the internal queue capacity to zero (e.g. a SynchronousQueue) and using the special rejection handler ThreadPoolExecutor.CallerRunsPolicy. This effectively pauses message consumption until a thread becomes free in the processing pool. Note, however, that while the consumer thread is busy running a task, it cannot fetch new messages.
There might be more ways of handling this. The client-side libraries of well-known message queues like Apache Kafka, Pulsar, and RabbitMQ might have built-in support for the same. These approaches, however, work as generic rate limiting at the ThreadPoolExecutor level.
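The two approaches above can be sketched in plain java.util.concurrent code (class and method names here are illustrative, not from the post):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class Backpressure {

    // Approach 1: a semaphore with one permit per worker. acquire() blocks
    // the submitting (consumer) thread whenever all workers are busy.
    static int runWithSemaphore(int tasks, int workers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        Semaphore permits = new Semaphore(workers);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            permits.acquire(); // blocks here when no worker is free
            pool.execute(() -> {
                try {
                    done.incrementAndGet(); // stand-in for real message processing
                } finally {
                    permits.release();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    // Approach 2: zero-capacity queue (SynchronousQueue) + CallerRunsPolicy.
    // When both workers are busy, the submitting thread runs the task itself,
    // pausing further submission until it finishes.
    static int runWithCallerRuns(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWithSemaphore(100, 4) + " " + runWithCallerRuns(100));
    }
}
```

Either way, backpressure propagates to the consumption thread instead of letting an unbounded queue eat memory.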
-
Using Spring AI with PostgreSQL pgvector to build generative AI apps in Java that scale and never fail. This new hands-on tutorial is for Java developers eager to learn how to: * Get started with PostgreSQL pgvector and the Spring AI Embedding Client * Optimize vector search with the HNSW index * Scale applications using distributed PostgreSQL (Yugabyte) Enjoy!
Spring AI With PostgreSQL pgvector: Building Generative AI Apps in Java
https://www.youtube.com/
-
FYI, a brief note on decimal ↔ binary conversion issues was added to java.lang.Double in JDK 22: https://lnkd.in/gJM-AS48 #OpenJDK
longBitsToDouble
download.java.net
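For context, a small sketch of the behavior that note documents: the doubleToLongBits/longBitsToDouble round-trip is exact, while decimal literals are only stored as the nearest representable binary value (class name here is illustrative):

```java
public class DoubleBits {
    public static void main(String[] args) {
        // 0.1 has no exact binary representation; the stored double is
        // the nearest representable value, visible in its raw bits.
        long bits = Double.doubleToLongBits(0.1);
        System.out.println(Long.toHexString(bits)); // 3fb999999999999a

        // bits -> double -> bits is lossless: the binary value round-trips.
        System.out.println(Double.longBitsToDouble(bits) == 0.1); // true

        // But decimal arithmetic on the approximations accumulates error.
        System.out.println(0.1 + 0.2 == 0.3); // false
    }
}
```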
-
Just finished Java: Data Structures by Bethan Palmer. Check it out: https://lnkd.in/duAZs8Ub #datastructures #java
Certificate of Completion
linkedin.com