Updated 6 months ago

mlora-cli • Rank 11.3 • Science 64%

An Efficient "Factory" to Build Multiple LoRA Adapters

Updated 6 months ago

loraenergysim • Rank 4.4 • Science 67%

LoRa Network Simulator to Monitor Energy Consumption

Updated 6 months ago

sorsa • Rank 6.2 • Science 54%

SORSA: Singular Values and Orthonormal Regularized Singular Vectors Adaptation of Large Language Models
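The repository's title describes adapting a model through its singular values and orthonormal-regularized singular vectors. A minimal numpy sketch of that general idea (illustrative only, not the repo's actual code): factor a weight matrix by SVD, train the singular values and vectors instead of the full matrix, and keep the vectors orthonormal via a penalty.

```python
import numpy as np

# Hedged sketch of SVD-based adaptation in the spirit of SORSA: the weight
# is factored as U diag(s) V^T; the factors become the trainable parameters,
# with the singular vectors kept (approximately) orthonormal via a
# regularizer. All names here are illustrative, not from the repository.

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 4))

# Thin SVD: U is 6x4 with orthonormal columns, Vt is 4x4 orthogonal.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# The factors reconstruct the original weight exactly.
assert np.allclose(U @ np.diag(s) @ Vt, W)

# Orthonormality penalty a SORSA-style method would regularize during
# training: ||U^T U - I||_F, which is zero while U stays orthonormal.
penalty = np.linalg.norm(U.T @ U - np.eye(U.shape[1]))
assert penalty < 1e-8

# Fine-tuning then perturbs s (and the vectors) rather than all of W.
s_tuned = s * 1.1
W_tuned = U @ np.diag(s_tuned) @ Vt
assert not np.allclose(W_tuned, W)
```

The appeal of this parameterization is that the trainable state is a small, structured slice of the weight, while the orthonormality penalty keeps the factorization well-conditioned.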

Updated 6 months ago

chinese-llama-alpaca • Rank 12.2 • Science 46%

Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

Updated 6 months ago

lotr • Rank 3.4 • Science 54%

Low Tensor Rank adaptation of large language models

Updated 5 months ago

assert-kth/repairllama • Rank 5.5 • Science 46%

RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair http://arxiv.org/pdf/2312.15698

Updated 6 months ago

finetuned-qlora-falcon7b-medical • Rank 6.3 • Science 44%

Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
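The core idea behind a QLoRA finetune like this one is to hold the base weights in a low-bit format and train only small full-precision LoRA factors on top. A simplified numpy sketch (plain 4-bit absmax quantization stands in for NF4, and all names are illustrative rather than the repo's actual code):

```python
import numpy as np

# Illustrative QLoRA sketch: base weights stored in a lossy 4-bit format,
# trainable LoRA factors kept in full precision. The quantizer below is a
# simplification (absmax to signed levels -7..7), not bitsandbytes' NF4.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)).astype(np.float32)

def quantize_4bit(w):
    # Scale so the largest magnitude maps to level 7, round to integers.
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_4bit(W)
W_hat = dequantize(q, scale)

# Quantization is lossy, but the error is bounded by half a step size.
assert np.abs(W - W_hat).max() <= scale / 2 + 1e-6

# Trainable LoRA factors sit on top of the frozen quantized base.
r = 2
A = rng.standard_normal((r, 8)).astype(np.float32) * 0.01
B = np.zeros((8, r), dtype=np.float32)  # zero-init: adapter starts as no-op
x = rng.standard_normal(8).astype(np.float32)
y = W_hat @ x + B @ (A @ x)
assert np.allclose(y, W_hat @ x)
```

In the real setup, the quantized base never receives gradients; only A and B are updated, which is what makes finetuning a 7B model feasible on a single consumer GPU.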

Updated 6 months ago

gr-lora_sdr • Science 54%

This is the fully-functional GNU Radio software-defined radio (SDR) implementation of a LoRa transceiver with all the necessary receiver components to operate correctly even at very low SNRs. This work has been conducted at the Telecommunication Circuits Laboratory, EPFL.

Updated 6 months ago

indic-llm • Science 44%

An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide range of domains and languages.

Updated 5 months ago

buaadreamer/mllm-finetuning-demo • Science 13%

Example code for fine-tuning multimodal large language models with LLaMA-Factory

Updated 6 months ago

lora • Science 36%

Using low-rank adaptation (LoRA) to quickly fine-tune diffusion models.
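The mechanism behind low-rank adaptation, as used by this repo and most of the entries above, can be sketched in a few lines of numpy (names like `lora_forward` and the rank/alpha values are illustrative, not from any of these repositories): the frozen weight W gains a trainable low-rank update B·A scaled by alpha/r.

```python
import numpy as np

# Minimal LoRA sketch: y = W x + (alpha / r) * B (A x), where W is frozen
# and only the low-rank factors A and B are trained. Illustrative only.

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B zero-initialized, the adapted layer matches the base layer,
# so training starts from exactly the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)

# After training, the adapter can be merged into W for inference,
# adding zero latency over the base model.
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))
```

Because only A and B (2·r·d parameters instead of d²) are trained, many small adapters can be built and swapped over one shared base model, which is exactly the "factory" framing of the mlora-cli entry above.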