runanywhere_llamacpp 0.15.8

LlamaCpp backend for RunAnywhere Flutter SDK. High-performance on-device LLM text generation with GGUF model support.
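To try the package, add it to your app's `pubspec.yaml` using the version shown above. This is a minimal sketch; it assumes only that your app already depends on Flutter, and it omits the `runanywhere` core package unless your app uses it directly.

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Version from this release (0.15.8)
  runanywhere_llamacpp: ^0.15.8
```

Then run `flutter pub get` to fetch the dependency.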

Changelog #

All notable changes to the RunAnywhere LlamaCpp Backend will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

0.15.8 - 2025-01-10 #

Added #

  • Initial public release on pub.dev
  • LlamaCpp integration for on-device LLM inference
  • GGUF model format support
  • Streaming text generation
  • Memory-efficient model loading
  • Native bindings for iOS and Android

Features #

  • High-performance text generation
  • Token-by-token streaming output
  • Configurable generation parameters (temperature, max tokens, etc.)
  • Automatic model management and caching

Platforms #

  • iOS 13.0+ support
  • Android API 24+ support
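The feature list above (GGUF model loading, token-by-token streaming, configurable temperature and max tokens) suggests usage along the following lines. Note that every identifier here (`LlamaCppBackend`, `loadModel`, `generateStream`, the parameter names, and the model path) is a hypothetical placeholder, not the package's documented API; consult the repository for the real interface.

```dart
// Hypothetical sketch only: the class and method names below are
// illustrative, not the actual runanywhere_llamacpp API.
import 'package:runanywhere_llamacpp/runanywhere_llamacpp.dart';

Future<void> main() async {
  // Load a GGUF model from local storage (path is an example).
  final backend = LlamaCppBackend();
  await backend.loadModel('/path/to/model.gguf');

  // Stream tokens as they are generated, using the configurable
  // parameters the changelog mentions (temperature, max tokens).
  final stream = backend.generateStream(
    'Explain on-device inference in one sentence.',
    temperature: 0.7,
    maxTokens: 128,
  );
  await for (final token in stream) {
    // Token-by-token streaming output.
    print(token);
  }

  await backend.unloadModel();
}
```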

Likes: 0 · Points: 0 · Downloads: 156

Publisher

unverified uploader


Homepage
Repository (GitHub)
View/report issues

Topics

#ai #llm #llama #text-generation #on-device

License

unknown

Dependencies

ffi, flutter, runanywhere
