Recent Releases of llama_ros

llama_ros - 5.3.2

Changelog from version 5.3.1 to 5.3.2:

  • 1c3ffcd new version 5.3.2
  • bc3453a new models GPT-OSS and MiniCPM-v4
  • 241369f llama.cpp updated - removing patch - adding new param no_extra_bufts

- C++
Published by mgonzs13 7 months ago

llama_ros - 5.3.1

Changelog from version 5.3.0 to 5.3.1:

  • 86a86c8 new version 5.3.1
  • 5cda267 llama.cpp updated
  • 0b1e16e adding logit_bias_eog + fixing ignore_eos
  • 3d80c2b minor style fixes adding this
  • b875490 llama.cpp updated

- C++
Published by mgonzs13 7 months ago

llama_ros - 5.3.0

Changelog from version 5.2.0 to 5.3.0:

  • 3f52a86 new version 5.3.0
  • eb1ed97 pddl demo added
  • 3b509d1 adding param stream_reasoning
  • 1dfadcc llama.cpp updated
  • 31c8ec7 Tool calling: streaming and reasoning (#32)

- C++
Published by mgonzs13 8 months ago

llama_ros - 5.2.0

Changelog from version 5.1.0 to 5.2.0:

  • 2c9c555 new version 5.2.0
  • 8b6689e adding prints to mtmd example
  • 1c03401 llama.cpp updated
  • 83314da hf_hub.cpp updated
  • 7617a50 Added a missing tag (#28)
  • e68db43 fixing clearing mtmds
  • 8dcec78 clear_mtmds function added to llava
  • 77823df minor fixes for audio
  • bbc6817 multi audio demo added
  • 4abd542 adding support for audio
  • bbf80ee fixing embeddings
  • ed8a215 fixing format + llama.cpp updated
  • 2200eb2 fixing format + llama.cpp updated
  • ef1b853 migrating to new memory functions
  • 273f80c fixing reranking-pooling params
  • 8e97310 Update llama_cpp_vendor to latest version (#27)
  • f1a4c4a Adding InternVL3
  • 6f03016 adding support for rolling
  • 668c7eb adding iron to README and llama_bt
  • 7eb9d23 testing iron workflows
  • ce24c07 replacing rolling with kilted
  • f8d9888 creating rolling workflows
  • c134529 simplifying ros2 distros in cmakelists
  • a923a10 use ament_target_dependencies for behaviortree_cpp_v3
  • 4e09104 Removed deprecated ament_cmake_target_dependencies (#26)

- C++
Published by mgonzs13 8 months ago

llama_ros - 5.1.0

Changelog from version 5.0.2 to 5.1.0:

  • bdc7a1c new version 5.1.0
  • 865ff64 New mtmd (#25)
  • 3b6c4dc llama.cpp updated

- C++
Published by mgonzs13 9 months ago

llama_ros - 5.0.2

Changelog from version 5.0.1 to 5.0.2:

  • 45986c8 new version 5.0.2
  • 950ff26 fixing sampling params - removing unused params (penalty_prompt_tokens, use_penalty_prompt_tokens) - adding new top_n_sigma - using params in C++: min_keep, ignore_eos, dynatemp_range, dynatemp_exponent, top_n_sigma, xtc_probability, xtc_threshold, dry_multiplier, dry_base, dry_allowed_length, dry_penalty_last_n, dry_sequence_breakers
  • 2080c12 llama.cpp updated
  • 8a2baac adding Qwen3

- C++
Published by mgonzs13 10 months ago

llama_ros - 5.0.1

Changelog from version 5.0.0 to 5.0.1:

  • 3e6b736 new version 5.0.1
  • a34c9c8 llama.cpp updated
  • ef6678d hf_hub.cpp updated

- C++
Published by mgonzs13 10 months ago

llama_ros - 5.0.0

Changelog from version 4.5.0 to 5.0.0:

  • 14b28db new version 5.0.0
  • 0e5f6ee fixing release workflows
  • 3eeeb5a llama.cpp updated
  • fbcf3e7 replace size with empty in if
  • 9adcab2 new gemma-3 model
  • 9daf79f dont use hf_hub if repo or file are empty
  • 33de5fe adding C++ comments for Doxygen
  • 8b69ed5 updating llama.cpp + renaming param model to model_path
  • be739be Bt chat completions (#24)
  • 0dfa1c7 llama.cpp updated
  • b1dcf2b fixing bt tests for jazzy
  • ca1fd20 fixing bt tests
  • f05428b jazzy/humble bt package
  • 9d34176 llama.cpp updated
  • 3235762 Comments in the messages and demo videos (#23)
  • 14f5b63 hf_hub.cpp updated
  • 1db6402 Chat completion fixes (#22)
  • eca380b fixing chatllama structured output demo
  • 806767b fixing chatllama demos
  • 84f7772 fixing python demos
  • 0573cc8 temporal fix for structured demo
  • 85f926d updating requirements
  • 2d2c4a2 adding gemma-3 and phi-4-mini
  • 915ad0d New Chat Completions endpoint (#21)

- C++
Published by mgonzs13 11 months ago

llama_ros - 5.0.0

Changelog from version 4.5.0 to 5.0.0:

  • 6f78d1a new version 5.0.0
  • d920a7e new version 5.0.0
  • 300e6be new version 5.0.0
  • 9e89349 new version 5.0.0
  • 12a5bfd new version 5.0.0
  • 8544068 new version 5.0.0
  • f5c160b testing created release trigger
  • 232082b new version 5.0.0
  • 32f262a new version 5.0.0
  • 20e1c43 fixing release workflows to trigger on new releases
  • 3eeeb5a llama.cpp updated
  • fbcf3e7 replace size with empty in if
  • 9adcab2 new gemma-3 model
  • 9daf79f dont use hf_hub if repo or file are empty
  • 33de5fe adding C++ comments for Doxygen
  • 8b69ed5 updating llama.cpp + renaming param model to model_path
  • be739be Bt chat completions (#24)
  • 0dfa1c7 llama.cpp updated
  • b1dcf2b fixing bt tests for jazzy
  • ca1fd20 fixing bt tests
  • f05428b jazzy/humble bt package
  • 9d34176 llama.cpp updated
  • 3235762 Comments in the messages and demo videos (#23)
  • 14f5b63 hf_hub.cpp updated
  • 1db6402 Chat completion fixes (#22)
  • eca380b fixing chatllama structured output demo
  • 806767b fixing chatllama demos
  • 84f7772 fixing python demos
  • 0573cc8 temporal fix for structured demo
  • 85f926d updating requirements
  • 2d2c4a2 adding gemma-3 and phi-4-mini
  • 915ad0d New Chat Completions endpoint (#21)

- C++
Published by mgonzs13 11 months ago

llama_ros - 4.5.0

Changelog from version 4.4.1 to 4.5.0:

  • b47f1b3 new version 4.5.0
  • a3aeaef Hfhub cpp (#20)
  • 0093120 new jazzy push workflow
  • 63b940a new jazzy build workflow
  • e021979 adding ament clang to llama_ros test

- C++
Published by github-actions[bot] 12 months ago

llama_ros - 4.4.1

Changelog from version 4.4.0 to 4.4.1:

  • cda255a new version 4.4.1
  • 4288518 llama.cpp updated
  • 1aa343b Patched clip to allow the usage of CUDA (#15)
  • 0bdd98f llama.cpp updated
  • 22829ac llama.cpp updated
  • f22aeef llama.cpp updated
  • a7b4b42 jinja2 upgraded
  • a3e38b0 comments for new grammar sampling config

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.4.0

Changelog from version 4.3.1 to 4.4.0:

  • 8a9fed5 new version 4.4.0
  • d1b6698 minor fix in requirements
  • 03b51f7 fixing demos
  • 79eb77d updating requirements
  • 8a3e63e grammar lazy and trigger words added
  • 4a12e72 MiniCPM-o added
  • a6a1c73 struct common_chat_msg
  • 134efc4 Add minja support (#13)

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.3.1

Changelog from version 4.3.0 to 4.3.1:

  • a930fa2 new version 4.3.1
  • 71fcd83 llama.cpp updated
  • 82aee46 fixing DeepSeek-R1 prompt
  • ea2357a fixing find stop when maximum number of tokens
  • 986b539 DeepSeek-R1 added
  • 6882abc llama.cpp updated
  • 34770dd unused Node import removed from README

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.3.0

Changelog from version 4.2.0 to 4.3.0:

  • 6099e22 new version 4.3.0
  • 877367e langgraph demo added
  • 62c913b new tool formatting
  • 4a34c4b llama.cpp updated
  • e8a2b5d llama.cpp updated
  • b74f152 llama.cpp updated + new vocab functions
  • 0171812 use_llama_template renamed to use_default_template
  • d5f07be auto tool_choice
  • 02d459b only tool_calling
  • febdd82 new tool grammar to output tool calls or text
  • f2a307f jinja2 added to requirements

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.2.0

Changelog from version 4.1.8 to 4.2.0:

  • 212f5f1 new version 4.2.0
  • 40121ba new line removed from llava
  • 46f3f05 new reranking and embeddings
  • 1292f11 llama.cpp updated
  • c68761a cleaning chat llama ros
  • 7f9d184 minor fixes to goal in execute
  • 43173bb not reset image in llava
  • 490f4a0 minor fix in README
  • b88d44b minor fixes
  • d309f5c minor fix in llava
  • 56cd9d0 llama.cpp updated
  • 4f203c5 demos and examples fixed in README
  • 8292411 fixing python imports in demos
  • 566a736 fixing rag demo
  • 8a01839 chatllama_tools_node renamed to chatllama_tools_demo_node
  • b8ed82a updating langchain versions
  • 4abf2d0 sorting python imports
  • bfa6e2a updating chroma version
  • 8c191ee moving get_metadata service - embedding and rerank models will not have get_metadata service
  • 51fff6c fixing rerank by setting normalization to -1
  • 2eab6d6 LangChain Tools on Chat (#12)
  • b037ad3 llama.cpp updated
  • 17b8719 phi-4 added
  • d1a24c4 new embedding models

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.1.8

Changelog from version 4.1.7 to 4.1.8:

  • 9bf2fce new version 4.1.8
  • 0ee364f Qwen2-VL yaml updated
  • ed22b00 llama.cpp updated
  • e005735 llava override eval batch instead of vector
  • 6262e24 Qwen2-VL support added
  • 335250b frieren image for llava demo
  • e261285 llama.cpp updated
  • b9e5d18 llama.cpp updated

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.1.7

Changelog from version 4.1.6 to 4.1.7:

  • f0a0af9 new version 4.1.7
  • efd8c97 fixing license comments
  • 8275fe4 ifndef guard names fixed
  • eff180d new llama logs
  • 60187e4 huggingface-hub upgraded to 0.27.0
  • 44bbb71 Falcon3 example added
  • 9c2ac6d llama.cpp updated
  • beb4a22 llama.cpp updated
  • 741b01e vendor C++ standard set to 17
  • 39db082 llama.cpp updated + new penalize sampling
  • 8bf89d9 Update README.md
  • 2b9a0df workflow names fixed

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.1.6

Changelog from version 4.1.5 to 4.1.6:

  • 379e971 new version 4.1.6
  • b2e0997 close inactive issues workflow
  • a1db9de updating workflows to use permissions
  • 3679ba1 llama.cpp updated + new kv_cache types
  • f977a38 llama.cpp updated

- C++
Published by github-actions[bot] about 1 year ago

llama_ros - 4.1.5

Changelog from version 4.1.4 to 4.1.5:

  • 701dda1 new version 4.1.5
  • 460bfaf license comments fixed
  • f32182d llama_params ifndef fixed
  • 868c70c llama.cpp updated
  • efad6bc Mistral model added
  • 59957a0 Hermes model updated
  • edd93d3 llama.cpp updated
  • 1106ec7 cron added to docker build workflow

- C++
Published by mgonzs13 about 1 year ago

llama_ros - 4.1.4

Changelog from version 4.1.3 to 4.1.4:

  • 893ae80 new version 4.1.4
  • 7374334 fixing logs
  • be3ad64 llama.cpp updated
  • 39f5283 llama.cpp updated + ggml_amx removed

- C++
Published by mgonzs13 about 1 year ago

llama_ros - 4.1.3

Changelog from version 4.1.2 to 4.1.3:

  • 5212102 new version 4.1.3
  • a729e44 new workflow to create releases
  • d0294ea removing wrong option for CURL
  • 0fe12b0 llama.cpp updated
  • c86501b debug param removed
  • 4ca5739 llama.cpp updated
  • 3e770cf devices param added
  • 879a1ab llama.cpp updated + new params sampling

- C++
Published by mgonzs13 about 1 year ago

llama_ros -

  • Improving includes order
  • Remove bos from llama-3 prompt
  • llama.cpp b4157

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • Getting metadata from GGUF model through llama.cpp
  • New metadata msgs (Metadata, GeneralInfo, ModelInfo, AttentionInfo, RoPEInfo, TokenizerInfo)
  • New service to get the metadata of the LLM/VLM
  • llama.cpp b4149

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • new github actions (docker push, doxygen)
  • llama.cpp b4050

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • github action for jazzy
  • SmolLM2 model added
  • requirements removed (lark, packaging)
  • llama.cpp b4011

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • fixing llama_rag demo
  • Tail-Free sampling removed
  • signal removed from mains
  • llama.cpp b3995

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • DRY sampling
  • llama.cpp b3982
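DRY ("Don't Repeat Yourself") sampling penalizes tokens that would extend a sequence already present in the context. As a rough illustration, here is a minimal sketch of the penalty term as commonly described, using llama.cpp-style parameter names (`dry_multiplier`, `dry_base`, `dry_allowed_length`); the repeat-matching machinery and the exact sampler behavior are omitted, so treat this as an assumption-laden sketch rather than the implementation:

```python
def dry_penalty(match_len: int, multiplier: float = 0.8,
                base: float = 1.75, allowed_length: int = 2) -> float:
    """Penalty subtracted from a token's logit when sampling it would
    extend a repeated sequence of `match_len` tokens from the context."""
    if match_len < allowed_length:
        return 0.0  # short repeats are tolerated
    # the penalty grows exponentially with the length of the repeat
    return multiplier * base ** (match_len - allowed_length)
```

With the defaults above, a 2-token repeat costs 0.8 and each further repeated token multiplies the penalty by 1.75.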

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • Dockerfile created
  • CI github actions for formatter and docker build
  • python formatted with black
  • llama.cpp b3974

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • lark added to requirements
  • normalization types for embeddings
  • llama.cpp b3962
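The normalization types control how the raw embedding vector is scaled before it is returned. A small illustration of the common L2 case (a standalone sketch, not the llama_ros API):

```python
import math

def l2_normalize(embedding: list[float]) -> list[float]:
    """Scale a vector to unit L2 norm, the usual choice for
    cosine-similarity search over embeddings."""
    norm = math.sqrt(sum(x * x for x in embedding))
    if norm == 0.0:
        return embedding  # avoid dividing by zero for the null vector
    return [x / norm for x in embedding]
```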

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • vendor CMakeLists fixed
  • llama.cpp b3933

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • new XTC sampling added
  • new system_prompt param
  • llama.cpp b3923

This version does not compile due to errors in the vendor CMakeLists
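The XTC ("Exclude Top Choices") sampler mentioned above removes the most probable tokens when several candidates exceed a probability threshold, keeping only the least probable of them. A simplified, deterministic sketch of that idea (the real sampler applies this step only with probability `xtc_probability`; parameter names follow llama.cpp, the rest is illustrative):

```python
def xtc_filter(probs: dict[str, float], xtc_threshold: float = 0.1) -> dict[str, float]:
    """Drop all tokens at or above the threshold except the least
    probable one of them, boosting diversity at the top of the distribution."""
    above = [t for t, p in probs.items() if p >= xtc_threshold]
    if len(above) < 2:
        return probs  # fewer than two top choices: nothing to exclude
    keep = min(above, key=lambda t: probs[t])
    return {t: p for t, p in probs.items() if t not in above or t == keep}
```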

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • common prefix added for llama.cpp commons
  • llama.cpp b3906

This version does not compile due to errors in the vendor CMakeLists

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama_rag demo fixed
  • llama.cpp b3889

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • reranking added
  • separate LLM, embedding models and reranking models
  • new services (reranking and detokenize)
  • models for reranking and embeddings added
  • vicuna prompt added
  • llama namespace removed from LlamaClientNode
  • full demo with LLM + chat template + RAG + reranking + stream
  • README:
    • model shards example added
    • reranking langchain and demo added
    • embedding demo added
    • minor fixes
  • langchain reranking added
  • langchain upgraded to 0.3
  • llama.cpp b3870
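Reranking takes the documents retrieved for a query and reorders them by a relevance score produced by a dedicated reranker model, which is the step this release inserts between retrieval and generation in the RAG demo. A minimal sketch of the reorder step (the scores here are a stand-in for the output of the reranking service):

```python
def rerank(documents: list[str], scores: list[float], top_k: int = 3) -> list[str]:
    """Order documents by descending relevance score and keep the best
    top_k, as a RAG pipeline does before building the final prompt."""
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:top_k]]
```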

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • chatllamaros added to README
  • model shard files download added
  • llama.cpp b3827

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • qwen2 updated to qwen2.5
  • llama.cpp b3799

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • new sampling from llama.cpp
  • grammar functions removed
  • n_remain removed
  • threadpool added
  • llama.cpp b3756

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • fixed stop when n_remain is 0
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • ChatLlamaROS stream fix
  • ChatLlamaROS demo video added
  • Fix passing image as data

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama.cpp updated
  • new cpu_params

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • service to list LoRAs added
  • service to modify LoRAs scale added
  • Qwen2 added
  • Phi-3 repos fixed
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • Format chat messages service created
  • llama_ros_common for langchain integration
  • Langchain Chat integration
  • new chatllama_demo created
  • n_threads_batch default set to 1
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • Llama and Llava destructors fixed
  • gpt_params renamed to llama_params
  • params are now structs managed by functions instead of a class
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama.cpp updated
  • minor fix to params
  • phi-3-adapter updated to 3.5

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • embedding set to false by default
  • fixes to readme
  • phi-3 updated to 3.5

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • Life cycle nodes for LlamaNode and LlavaNode

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama.cpp updated
  • MiniCPM 2.6 added

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llava namespace removed
  • llama_cli prompt fixed

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama.cpp updated
  • should_add_bos_token --> add_bos_token

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llava params (image text, image prefix, image suffix)
  • llava added to llama_cli
  • MiniCPM 2_5 added
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • lora path option added
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama.cpp updated
  • multiple LoRA adapters feature added
  • Phi-3 multi-adapter example added

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • llama.cpp updated
  • llama init returns struct instead of tuple

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

Gemma 2 added

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • ggml renamed to llama_ggml
  • This avoids mixing libllama.so libs from whisper.cpp and llama.cpp

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • Jazzy Support added
  • llama_cli setup.py fixed
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros - 3.0.1

  • llama.cpp updated
  • llama-3 updated to 3.1
  • lora_base replaced by lora_adapter

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

Package versions set and new packages for llama_ros:

  • llama_cpp_vendor: vendor package to download and build llama.cpp
  • llama_demos: package that contains the original demos from llama_ros

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

  • stop added to GenerateResponse action
  • LangChain wrapper supports VLMs
  • llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

Stream feature added to llama_client_node and LangChain
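Streaming delivers partial text as it is generated instead of a single final response. The consumer side can be pictured as iterating over chunks; the names below are hypothetical stand-ins, not the llama_ros client API:

```python
from typing import Iterator

def fake_token_stream() -> Iterator[str]:
    """Stand-in for a streaming response: yields text chunks as they arrive."""
    yield from ["Hello", ", ", "world", "!"]

def consume_stream(chunks: Iterator[str]) -> str:
    """Accumulate streamed chunks into the full response; a UI would
    display each partial chunk as soon as it is received."""
    text = ""
    for chunk in chunks:
        text += chunk
    return text
```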

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated (new detokenize function)

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated and minor fixes for llama_cli

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama_cli:

  • launch: command to launch LLMs
  • prompt: command to generate responses using a prompt

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated + new localmentor repo

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated + new GGML_CUDA/LLAMA_CUDA env var to set whether llama.cpp is compiled with CUDA

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated + more llava YAML files

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

params and prompt files renamed

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

New bringup format using YAML files with params

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 over 1 year ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 almost 2 years ago

llama_ros - 2.2.2

stopping words fixed, replacing "\\n" with "\n"
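The fix concerns stop words passed in as plain strings, where a newline arrives as the two-character escape sequence `\n` and must be converted to an actual newline before it can match generated text. A minimal sketch of that unescaping step:

```python
def unescape_stop_word(word: str) -> str:
    """Turn the literal two-character sequence backslash-n into a real
    newline so the stop word matches the model's output."""
    return word.replace("\\n", "\n")
```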

- C++
Published by mgonzs13 almost 2 years ago

llama_ros - 2.2.1

stopping_words param

- C++
Published by mgonzs13 almost 2 years ago

llama_ros - 2.2.0

New params and configurations for create_llama_launch:

  • system_prompt
  • system_prompt_file
  • system_prompt_type
  • model
  • lora_base
  • mmproj

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

llama3

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

simple_node removed

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

New json-schema-to-grammar from llama.cpp

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

LLaVA --> llava_ros

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

Schema Converter: create JSON BNF grammars from JSON/dict
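The converter turns a JSON schema into a BNF-style grammar that constrains generation to valid JSON. A heavily simplified sketch for a flat object whose properties are all strings (the real converter handles the full schema vocabulary; the rule names and output format here are illustrative only):

```python
def flat_schema_to_gbnf(schema: dict) -> str:
    """Emit a toy GBNF-style grammar for {"type": "object",
    "properties": {...}} where every property is a string."""
    props = schema.get("properties", {})
    # one quoted-key-plus-string-value rule per property, comma-separated
    fields = ' "," '.join(f'"\\"{name}\\":" string' for name in props)
    return (
        f'root ::= "{{" {fields} "}}"\n'
        'string ::= "\\"" [^"]* "\\""\n'
    )
```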

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

  • llama.cpp updated
  • mul_mat_q removed
  • GGML_USE_CUBLAS removed
  • pooling_type param

- C++
Published by mgonzs13 almost 2 years ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

multi-option params fixed

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

llama.cpp and readme updated

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

llama.cpp updated

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

New version

  • Generate response cleaned
  • Generate embeddings fixed
  • Batch used to generate text
  • llama.cpp updated

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

Logs fixed in llama.cpp and more params added

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago

llama_ros -

- C++
Published by mgonzs13 about 2 years ago