super_cache 1.0.1
Production-ready LRU + TTL cache for Flutter. In-memory, encrypted, and disk layers with stampede protection, repository integration, and live cache metrics.
# super_cache
Production-ready LRU + TTL cache for Flutter and Dart. Zero external dependencies. Add caching to any repository in 3 lines. Prevent duplicate API calls. Observe live hit/miss metrics. Optionally extend with disk persistence or AES-256-GCM encryption.
```dart
class ProductRepository with CacheRepositoryMixin<String, Product> {
  @override
  final cache = MemoryCache<String, Product>(
    maxEntries: 200,
    defaultTTL: const Duration(minutes: 5),
  );

  Future<Product?> getProduct(String id) =>
      fetchWithCache(id, onMiss: () => api.fetchProduct(id));
}
```
One mixin. One method. You get stampede protection, O(1) LRU eviction, TTL expiry, and live metrics — automatically.
## Installation
```yaml
dependencies:
  super_cache: ^1.0.0         # this package — zero external dependencies
  super_cache_disk: ^1.0.0    # optional: persistent disk layer → pub.dev/packages/super_cache_disk
  super_cache_secure: ^1.0.0  # optional: AES-256-GCM encrypted layer → pub.dev/packages/super_cache_secure

dev_dependencies:
  super_cache_testing: ^1.0.0 # FakeCache + ManualClock → pub.dev/packages/super_cache_testing
```
## Quick start
### In-memory LRU cache
```dart
import 'package:super_cache/super_cache.dart';

final cache = MemoryCache<String, User>(
  maxEntries: 500,
  defaultTTL: const Duration(minutes: 10),
);

cache.put('user_1', user);
final u = cache.get('user_1'); // synchronous — no await needed

await cache.dispose();
```
### Repository integration with stampede protection
```dart
class UserRepository with CacheRepositoryMixin<String, User> {
  @override
  final Cache<String, User> cache = MemoryCache(maxEntries: 200);

  Future<User?> getUser(String id) => fetchWithCache(
        id,
        policy: const CacheAside(ttl: Duration(minutes: 5)),
        onMiss: () => _api.fetchUser(id),
      );
}
```
If two widgets call `getUser('u_1')` simultaneously while the cache is cold, exactly one network request fires. The second caller awaits the same `Future`.
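The single-flight behavior can be sketched independently of the package. The following is a simplified illustration of the in-flight-map technique, not the package's internals; `SingleFlight` and its members are hypothetical names:

```dart
import 'dart:async';

/// Simplified single-flight helper: concurrent callers for the same key
/// share one in-flight Future, so the loader runs at most once per key.
class SingleFlight<K, V> {
  final _inFlight = <K, Future<V>>{};

  Future<V> run(K key, Future<V> Function() loader) {
    // If a fetch for this key is already running, return the same Future;
    // otherwise start the loader and clear the slot when it settles.
    return _inFlight.putIfAbsent(key, () {
      return loader().whenComplete(() => _inFlight.remove(key));
    });
  }
}

Future<void> main() async {
  var calls = 0;
  final flight = SingleFlight<String, int>();

  Future<int> load() async {
    calls++;
    await Future<void>.delayed(const Duration(milliseconds: 10));
    return 42;
  }

  // Two "simultaneous" callers: only one loader invocation fires.
  final results = await Future.wait([
    flight.run('u_1', load),
    flight.run('u_1', load),
  ]);
  print('$results $calls'); // [42, 42] 1
}
```

Removing the entry in `whenComplete` (success or failure) matters: it lets a failed fetch be retried, while a fetch that is still running keeps absorbing new callers.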
### Layered cache (Memory + Disk)
```dart
import 'package:super_cache/super_cache.dart';
import 'package:super_cache_disk/super_cache_disk.dart';

final disk = DiskCache<String, Product>(
  directory: await getApplicationCacheDirectory(),
  codec: JsonCacheCodec(
    fromJson: Product.fromJson,
    toJson: (p) => p.toJson(),
  ),
  defaultTTL: const Duration(hours: 24),
);
await disk.initialize();

final cache = CacheOrchestrator<String, Product>(
  l1: MemoryCache(maxEntries: 200),
  l3: disk,
);

// First access:  L1 miss → L3 hit → promotes to L1
// Second access: L1 hit — no disk I/O
```
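The read-promotion path reduces to a simple fallthrough. The sketch below models it with plain types rather than the package's `CacheOrchestrator`; `TwoLayerCache` and its fields are illustrative names:

```dart
import 'dart:async';

/// Simplified two-layer read path: check the fast layer first, fall back
/// to the slow layer, and promote slow-layer hits into the fast layer.
class TwoLayerCache<K, V> {
  TwoLayerCache(this.l1, this.l3);
  final Map<K, V> l1;              // stand-in for the in-memory layer
  final Future<V?> Function(K) l3; // stand-in for the disk layer's get

  Future<V?> get(K key) async {
    final fast = l1[key];
    if (fast != null) return fast;    // L1 hit: no disk I/O
    final slow = await l3(key);
    if (slow != null) l1[key] = slow; // promote to L1 for next time
    return slow;
  }
}

Future<void> main() async {
  var diskReads = 0;
  final disk = <String, String>{'p1': 'Widget'};
  final cache = TwoLayerCache<String, String>(
    {},
    (key) async {
      diskReads++;
      return disk[key];
    },
  );

  await cache.get('p1'); // L1 miss, disk hit, promoted
  await cache.get('p1'); // L1 hit, disk untouched
  print(diskReads); // 1
}
```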
## Features

| Feature | Details |
|---|---|
| O(1) LRU | HashMap + manual doubly-linked list — faster than LinkedHashMap delete-reinsert |
| TTL | Absolute or sliding; lazy expiry on `get` + background sweep timer |
| Stampede protection | In-flight map — concurrent misses for the same key produce one `onMiss` call |
| Layered caching | `CacheOrchestrator` wires L1 → L2 → L3 with automatic read-promotion |
| Cache policies | `CacheAside`, `WriteThrough`, `RefreshAhead` |
| Live metrics | `metrics` snapshot + `metricsStream` broadcast — hit rate, evictions, entry count |
| Memory pressure | `evictFraction()` wired to `WidgetsBindingObserver.didHaveMemoryPressure` |
| `FutureOr<V?>` API | Memory hits are synchronous — no event-loop yield |
| Byte-based eviction | `maxBytes` + `sizeEstimator` — evict by size, not just count |
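The "O(1) LRU" row describes a classic structure: a hash map from key to list node, plus a doubly-linked list ordered by recency, so both lookup and recency updates avoid any scan. A minimal self-contained sketch of that idea follows; it is an illustration of the technique, not the package's actual implementation:

```dart
/// Minimal LRU: HashMap for O(1) lookup, doubly-linked list for O(1)
/// recency updates and eviction of the least-recently-used entry.
class _Node<K, V> {
  _Node(this.key, this.value);
  final K key;
  V value;
  _Node<K, V>? prev, next;
}

class LruCache<K, V> {
  LruCache(this.maxEntries);
  final int maxEntries;
  final _map = <K, _Node<K, V>>{};
  _Node<K, V>? _head, _tail; // head = most recent, tail = least recent

  V? get(K key) {
    final node = _map[key];
    if (node == null) return null;
    _moveToFront(node); // touching an entry makes it most recent
    return node.value;
  }

  void put(K key, V value) {
    final existing = _map[key];
    if (existing != null) {
      existing.value = value;
      _moveToFront(existing);
      return;
    }
    final node = _Node(key, value);
    _map[key] = node;
    _addToFront(node);
    if (_map.length > maxEntries) {
      final lru = _tail!;
      _unlink(lru);
      _map.remove(lru.key); // evict the least-recently-used entry
    }
  }

  void _moveToFront(_Node<K, V> node) {
    if (identical(node, _head)) return;
    _unlink(node);
    _addToFront(node);
  }

  void _addToFront(_Node<K, V> node) {
    node.next = _head;
    node.prev = null;
    _head?.prev = node;
    _head = node;
    _tail ??= node;
  }

  void _unlink(_Node<K, V> node) {
    node.prev?.next = node.next;
    node.next?.prev = node.prev;
    if (identical(node, _head)) _head = node.next;
    if (identical(node, _tail)) _tail = node.prev;
  }
}

void main() {
  final cache = LruCache<String, int>(2);
  cache.put('a', 1);
  cache.put('b', 2);
  cache.get('a');    // touch 'a' so 'b' becomes least recent
  cache.put('c', 3); // evicts 'b'
  print('${cache.get('a')} ${cache.get('b')} ${cache.get('c')}'); // 1 null 3
}
```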
## Cache policies

```dart
// CacheAside (default): lazy read-through — check cache, fetch on miss
fetchWithCache(id, policy: const CacheAside(ttl: Duration(minutes: 5)), onMiss: …);

// WriteThrough: always fetch fresh, then cache the result
fetchWithCache(id, policy: const WriteThrough(), onMiss: …);

// RefreshAhead: serve current value, refresh silently in background
fetchWithCache(
  id,
  policy: const RefreshAhead(
    refreshAfter: Duration(minutes: 4),
    ttl: Duration(minutes: 6),
  ),
  onMiss: …,
);
```
## TTL modes

```dart
// Absolute TTL (default): entry expires X seconds after put()
MemoryCache<String, String>(defaultTTL: const Duration(minutes: 10))

// Sliding TTL: TTL resets on every successful get()
MemoryCache<String, String>(
  defaultTTL: const Duration(minutes: 10),
  ttlMode: TTLMode.sliding,
)
```
## Observability

```dart
// One-shot snapshot
final m = cache.metrics;
print('Hit rate: ${(m.hitRate * 100).toStringAsFixed(1)}%');
print('Entries: ${m.currentEntries}');
print('Evictions: ${m.evictions}');

// Live broadcast stream — fires on every sweep interval
cache.metricsStream.listen((m) {
  analytics.record('cache_hit_rate', m.hitRate);
});
```
## Memory pressure (Flutter)

```dart
import 'package:super_cache/super_cache_flutter.dart';

final cache = MemoryCache<String, Uint8List>(maxEntries: 500);
final watcher = MemoryCachePressureWatcher(cache: cache);
// Registered automatically — call watcher.dispose() when done.
```
Under moderate pressure the watcher evicts the least-recently-used 20% of entries; under critical pressure, the least-recently-used 50%.
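The fraction-based response is simple arithmetic over the current entry count. A sketch of how a pressure level might map to an eviction count; the `Pressure` enum and `entriesToEvict` are hypothetical names, not the package's API:

```dart
/// Sketch: translate memory-pressure severity into how many
/// least-recently-used entries to drop.
enum Pressure { moderate, critical }

int entriesToEvict(int currentEntries, Pressure level) {
  final fraction = switch (level) {
    Pressure.moderate => 0.20, // drop the LRU 20%
    Pressure.critical => 0.50, // drop the LRU 50%
  };
  // ceil() so even a tiny cache frees at least one entry under pressure.
  return (currentEntries * fraction).ceil();
}

void main() {
  print(entriesToEvict(500, Pressure.moderate)); // 100
  print(entriesToEvict(500, Pressure.critical)); // 250
}
```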
## Testing

```dart
import 'package:super_cache_testing/super_cache_testing.dart';

test('entry expires after TTL', () {
  final clock = ManualClock();
  final cache = FakeCache<String, int>(clock: clock);

  cache.put('score', 42, ttl: const Duration(minutes: 5));
  expect(cache.get('score'), 42);

  clock.advance(const Duration(minutes: 6)); // jump time — no Future.delayed
  expect(cache.get('score'), isNull);
});
```
## The `Cache<K, V>` interface

All implementations — including `FakeCache` — satisfy one interface:

```dart
abstract interface class Cache<K, V> {
  FutureOr<V?> get(K key);
  FutureOr<CacheResult<V>> getResult(K key); // hit / stale / miss
  FutureOr<void> put(K key, V value, {Duration? ttl});
  FutureOr<void> putAll(Map<K, V> entries, {Duration? ttl});
  FutureOr<void> remove(K key);
  FutureOr<void> removeWhere(bool Function(K, V) test);
  FutureOr<void> clear();
  FutureOr<bool> containsKey(K key);

  CacheMetrics get metrics;
  Stream<CacheMetrics> get metricsStream;

  Future<void> dispose();
}
```
## When not to use
| Situation | Better alternative |
|---|---|
| 1–5 cached values with no eviction needs | Plain `Map` |
| Multi-isolate shared cache | Add an isolate bridge on top of `DiskCache` |
| Blobs > 50 MB or > 50k entries | Dedicated local database |
| Hardware secure enclave required | Platform-specific secure storage |
## Package family

| Package | Purpose |
|---|---|
| `super_cache` | Core LRU engine, orchestrator, mixin (this package) |
| `super_cache_secure` | AES-256-GCM encrypted in-memory cache |
| `super_cache_disk` | File-per-entry persistent cache with SHA-256 integrity |
| `super_cache_testing` | `FakeCache` + `ManualClock` for unit tests |
## License
MIT · Published by jihedmrouki.com