| Topic | Title |
|---|---|
| localLLM-package | R Interface to llama.cpp with Runtime Library Loading |
| ag_news_sample | AG News Classification Sample |
| apply_chat_template | Apply Chat Template to Format Conversations |
| apply_gemma_chat_template | Apply Gemma-Compatible Chat Template |
| backend_free | Free localLLM Backend |
| backend_init | Initialize localLLM Backend |
| context_create | Create Inference Context for Text Generation |
| detokenize | Convert Token IDs Back to Text |
| download_model | Download a Model Manually |
| generate | Generate Text Using Language Model Context |
| generate_parallel | Generate Text in Parallel for Multiple Prompts |
| get_lib_path | Get Backend Library Path |
| get_model_cache_dir | Get the Model Cache Directory |
| install_localLLM | Install localLLM Backend Library |
| lib_is_installed | Check if Backend Library is Installed |
| list_cached_models | List Cached Models on Disk |
| localLLM | R Interface to llama.cpp with Runtime Library Loading |
| model_load | Load Language Model with Automatic Download Support |
| quick_llama | Quick LLaMA Inference |
| quick_llama_reset | Reset quick_llama State |
| set_hf_token | Configure Hugging Face Access Token |
| smart_chat_template | Smart Chat Template Application |
| tokenize | Convert Text to Token IDs |
| tokenize_test | Test tokenize Function (Debugging) |
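
The topics above cover setup, model management, and inference. For orientation, here is a minimal quick-start sketch using the high-level helper; treating the prompt as the first positional argument of `quick_llama()` is an assumption, since the index does not spell out signatures.

```r
library(localLLM)

# One-time setup: fetch the llama.cpp backend library if it is
# missing (see install_localLLM and lib_is_installed above).
if (!lib_is_installed()) {
  install_localLLM()
}

# High-level one-call inference; prompt-first is an assumed signature.
quick_llama("Explain what a GGUF file is in one sentence.")

# Drop any model/context state cached by quick_llama().
quick_llama_reset()
```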
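
The lower-level topics (`backend_init`, `model_load`, `context_create`, `tokenize`, `generate`, `backend_free`) compose into an explicit pipeline. A sketch under assumed signatures: the positional arguments below are illustrative, and the model URL is a placeholder rather than a real repository.

```r
library(localLLM)

backend_init()  # load the runtime library into the session

# Placeholder URL; model_load() advertises automatic download
# support, so a local path or a remote GGUF location should work.
model <- model_load("https://huggingface.co/ORG/REPO/resolve/main/model.gguf")

ctx <- context_create(model)  # inference context for text generation

# Round-trip a string through the tokenizer (tokenize/detokenize);
# passing the model as the first argument is an assumption.
ids <- tokenize(model, "Hello from R")
detokenize(model, ids)

generate(ctx, "The capital of France is")

backend_free()  # release the backend when done
```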
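
For chat-tuned models, the template helpers (`apply_chat_template`, `smart_chat_template`, `apply_gemma_chat_template`) format a conversation before generation. The message structure below, a list of role/content pairs, is an assumption about the expected input; `model` and `ctx` are reused from the previous sketch.

```r
library(localLLM)

# Assumed message structure: a list of role/content pairs.
messages <- list(
  list(role = "system", content = "You are a concise assistant."),
  list(role = "user",   content = "What does tokenization mean?")
)

# apply_chat_template() formats with the model's own template;
# smart_chat_template() picks a suitable one automatically, and
# apply_gemma_chat_template() targets Gemma-style models.
prompt <- apply_chat_template(model, messages)
generate(ctx, prompt)
```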
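
Model files are cached on disk, and the cache topics (`set_hf_token`, `download_model`, `list_cached_models`, `get_model_cache_dir`) manage that cache directly. The token and URL below are placeholders.

```r
library(localLLM)

# Store a Hugging Face token so gated repositories can be fetched.
set_hf_token("hf_...")  # placeholder token

# Pre-download a model into the cache instead of loading it lazily.
download_model("https://huggingface.co/ORG/REPO/resolve/main/model.gguf")

# Inspect the cache location and its contents.
get_model_cache_dir()
list_cached_models()
```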