Sunday, September 25, 2011
New Home...
Monday, August 01, 2011
What Is Terracotta?
Monday, July 11, 2011
Easy Java Performance Tuning: Manage Objects By Size On The Java Runtime Heap
Sunday, May 22, 2011
Exciting Times... Terracotta and Software AG
Wednesday, April 20, 2011
Local Caching++
- 40 DB tables in Hibernate that can be cached
- A web cache
- A user session cache
- Hibernate/Lots of caches - When using Hibernate you often end up with as many as 100 tables in your DB. How do you balance a fixed amount of resources (heap/BigMemory) across 100 caches?
- Indirect knobs/Bytes vs Count/TTL - In local Java caching the control points are almost always measured in number of entries and time to live. But wait a minute! When I start the JVM I don't say how many objects the heap can hold or for how long. I say how many bytes of memory the heap can use.
- Who Tunes and When? - At some companies the desire is to have the "Application Administrator" do the tuning. At others it's the "Developer." They have different understandings of the application. The developer can tune by knowledge of the application. The app admin can only tune based on what's happening when the application is running.
- Tune from the top - Define max resource usage for the whole cache manager and then optionally define it for the individual caches underneath it as needed. So if you have a hundred caches you can start with, "Give these 100 caches N amounts of Heap/OffHeap." Then monitor and see if any specific caches need special attention.
- Tune the constrained resource, bytes - TTL is a cache-freshness concern, not a resource-management concern. Max entry count does not directly map to available heap resources. So we are adding bytes-based tuning. This eliminates the mistake-prone process of trying to control resources with TTL/TTI/count and hoping you get it right. Instead you say, "I want to allow caching to use 30 percent of heap." We take it from there.
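The "tune from the top" idea above can be sketched as an ehcache.xml fragment. This is a hypothetical configuration: the attribute name maxBytesLocalHeap and percentage syntax are from the byte-based sizing that shipped with this feature in Ehcache, and the cache names are made up for illustration.

```xml
<!-- CacheManager-level pool: all caches share this byte budget. -->
<ehcache maxBytesLocalHeap="30%">

  <!-- Most caches simply draw from the shared pool; no per-cache sizing. -->
  <cache name="hibernateEntities"/>

  <!-- A hot cache can be given its own slice as needed, after monitoring. -->
  <cache name="userSessions" maxBytesLocalHeap="64M"/>

</ehcache>
```

The point of the design: you state one number for the whole cache manager, and only add per-cache limits where monitoring shows a cache needs special attention.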
Saturday, April 16, 2011
Where To Buy Your Apple Gear?
- The price tag of the device itself
- State sales tax
- Shipping and Handling
- What you can get thrown into the deal
Wednesday, April 13, 2011
Please Strengthen My Weak LinkedIn Links...
Thursday, March 24, 2011
"What" "When" and "Where" ... Quartz Scheduler 2.0 Goes GA
- Resource Availability - CPU available, Memory Available, custom constraints
- Ehcache's data locality - Bring the work to where the data is
- Static allocation - Just decide where it goes
- Easy-to-use fluent API - Quartz 2.0 has a new, easy-to-use fluent interface that hides the complexity of building out the description of your jobs behind a simple description of what you want to happen and when. I wrote a short blog about this when it was in beta.
- Quartz "Where" - A constraint-based system for controlling where jobs execute, based on things like CPU and memory usage, OS, and Ehcache data locality
- Quartz Manager - A Flash-based GUI console for managing and monitoring your scheduler in production.
- Batching - Helps improve a scheduler's throughput by letting you trade off perfectly timed execution against the benefits of batching.
- Tons of bug fixes and features - Lots of long-requested features. Check out the link for the list.
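The fluent API mentioned above looks roughly like this. A minimal sketch using the Quartz 2.0 builders (JobBuilder, TriggerBuilder, SimpleScheduleBuilder); HelloJob and the job/trigger names are made up for the example.

```java
import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

public class FluentQuartzExample {

    // "What": a trivial job.
    public static class HelloJob implements Job {
        public void execute(JobExecutionContext ctx) {
            System.out.println("Hello, Quartz!");
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = newJob(HelloJob.class)
                .withIdentity("helloJob", "examples")
                .build();

        // "When": start now, repeat every 40 seconds.
        Trigger trigger = newTrigger()
                .withIdentity("helloTrigger", "examples")
                .startNow()
                .withSchedule(simpleSchedule()
                        .withIntervalInSeconds(40)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}
```

The description of the schedule reads top to bottom; none of the 1.x JobDetail/SimpleTrigger constructor juggling is needed.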
Monday, February 14, 2011
Quick 5:41 Intro To Ehcache Search (Now GA)
- Search - Brand new Search API. Allows one to get beyond key-based lookup of objects (Check out this sample)
- Local Transactions - Fast optimistic concurrency without the need for a TransactionManager (Check out this sample)
- Bigger BigMemory (ee) - 2 billion entries, 1.3 million TPS, extreme predictability for meeting SLAs
- Bigger Disk Store (ee) - Swap your Ehcache to disk. Grow to hundreds of gigs with no on heap footprint
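A minimal sketch of the Search API mentioned above, using the net.sf.ehcache.search classes from Ehcache 2.4. It assumes a cache named "people" with a searchable "age" attribute declared in ehcache.xml; those names are made up for the example.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.search.Attribute;
import net.sf.ehcache.search.Query;
import net.sf.ehcache.search.Results;

public class EhcacheSearchExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.newInstance();
        // Assumes ehcache.xml declares a searchable cache "people"
        // with a search attribute "age".
        Cache people = manager.getCache("people");

        Attribute<Integer> age = people.getSearchAttribute("age");

        // Find every entry with age > 30 -- no keys required.
        Query query = people.createQuery()
                .addCriteria(age.gt(30))
                .includeKeys()
                .includeValues();
        Results results = query.execute();

        System.out.println("Hits: " + results.size());
    }
}
```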
Wednesday, January 26, 2011
Ehcache At 2 Billion...
What's Up With Ehcache 2.4
Ehcache is the de facto caching standard for Java (500,000+ production deployments; the majority of enterprise Java applications). Ehcache 2.4 is coming out soon and includes capabilities that will make it even easier to use and more powerful, while still maintaining its light weight.
The highlights include:
- Search - Quickly find entries based on the criteria of your choosing. String matching, dates, ranges, sums, averages etc.
- Fast local transactions - Improved JTA performance, plus a new non-JTA transaction API for user-level control
- Even more capacity and performance
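The local-transactions highlight above can be sketched as follows, using the TransactionController from the Ehcache 2.4 API. It assumes a cache configured with transactionalMode="local" in ehcache.xml; the cache name and keys are made up for the example.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.TransactionController;

public class LocalTxExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.newInstance();
        // Assumes ehcache.xml configures "txCache" with transactionalMode="local".
        Cache cache = manager.getCache("txCache");
        TransactionController tx = manager.getTransactionController();

        tx.begin();
        try {
            cache.put(new Element("user:42", "Alice"));
            cache.put(new Element("user:43", "Bob"));
            tx.commit(); // both puts become visible atomically
        } catch (RuntimeException e) {
            tx.rollback(); // neither put is applied
            throw e;
        }
    }
}
```

No TransactionManager or JTA setup is involved; the begin/commit/rollback cycle is driven entirely by user code.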
What I've been Testing
I've written before about BigMemory for Enterprise Ehcache and how it solves the problem of long, unpredictable GC pauses in Java. The first release of BigMemory was… well, big. In Enterprise Ehcache 2.4, BigMemory has gotten even bigger.
Using the Enterprise Ehcache Big Memory Pounder I was able to show that Enterprise Ehcache 2.4 now easily handles:
- Entry Count: > 2 billion entries (I reached 2 billion on the hardware I had; with bigger hardware, I could probably have gone much higher).
- Throughput: 1.3 million operations per second (symmetric read and write; CPU bound)
- SLA/Predictability: No GC pauses and a predictable 38-42 ops/thread/millisecond throughout the test
- Data Size: 1-350 GB in-memory cache (again, I was limited by the hardware I had; with more RAM, I could probably have gone much higher)
- Flexible, Efficient Entry Sizes: The cache can now dynamically handle very large (10-100 MB) and very small (just a few bytes) entries together more efficiently, with no tuning. (This test used small entries in order to fit as many entries as possible into the memory I had. I also ran tests with fewer entries to validate a wide range of sizes.)
- Tuning: All tests were done with NO TUNING. Right out of the box.
Here's the hardware and software stack I used for my testing:
Cisco UCS C250 Server
Dual Intel X5670 2.93 GHz CPUs
384 GB RAM (8 GB x 48)
Red Hat Enterprise Linux 5.4
Sun JDK 1.6_22
For this test, all of the data was in memory.
A Bit About Ehcache BigMemory
BigMemory is 100% pure Java and runs in-process with your Java application. No magic or special JVMs required (it works on IBM JVMs and JRockit as well). Cache data is kept safely hidden from Java GC, and from the pauses that come with large heaps, by storing it in BigMemory's off-heap store.
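As a rough sketch, turning on the off-heap store is a configuration change rather than a code change. The attribute names below (overflowToOffHeap, maxMemoryOffHeap) are from the Enterprise Ehcache configuration of this era, and the sizes are illustrative; the JVM must also be started with a direct-memory limit at least as large.

```xml
<!-- Entries overflow from the small on-heap tier into the
     GC-invisible off-heap store. -->
<cache name="bigCache"
       maxElementsInMemory="10000"
       overflowToOffHeap="true"
       maxMemoryOffHeap="4G"/>
```

The JVM would then be launched with something like `-XX:MaxDirectMemorySize=5G` so the off-heap store has direct memory to allocate from.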
Embedding
BigMemory got its start as a component of the Terracotta Server Array, and as a result it is particularly useful for embedding. Its performance characteristics and no-tuning approach improve "The Out Of The Box Experience" and save money on support by removing the tuning users would otherwise have to do and the problems caused by GC pauses.
You may be thinking...
"I don't have 2 billion entries in my caches?"
That's ok. Ehcache is a lightweight core library (under 1MB) for caching that's ubiquitous and easy to use. When it's needed, Ehcache lets you scale up and out to billions of entries and terabytes of data. It does so at a manageable server density without changing code/architecture and without a bunch of tuning and learning. This protects not only your knowledge investment but your code investment.
More about BigMemory for Enterprise Ehcache:
http://terracotta.org/bigmemory
More about the 2010 Ehcache user survey:
Ehcache User Survey Whitepaper