Provenance Brief
Research

Academic or research source. Check the methodology, sample size, and whether it's been replicated.

Disentangling Task Conflicts in Multi-Task LoRA via Orthogonal Gradient Projection

Multi-Task Learning (MTL) combined with Low-Rank Adaptation (LoRA) has emerged as a promising direction for parameter-efficient deployment of Large Language Models (LLMs).
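The paper's full method is not included in this brief, but the technique named in the title, orthogonal gradient projection, is commonly understood as follows: when two tasks' gradients conflict (negative dot product), one gradient is projected onto the plane orthogonal to the other, removing the opposing component. A minimal sketch of that general idea (in the style of PCGrad-like conflict resolution, not the paper's exact algorithm; the function name is hypothetical):

```python
import numpy as np

def project_conflicting(g_i, g_j):
    """Remove the component of g_i that conflicts with g_j.

    If the two gradients point in opposing directions (negative dot
    product), project g_i onto the plane orthogonal to g_j; otherwise
    leave g_i unchanged.
    """
    dot = np.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / np.dot(g_j, g_j)) * g_j
    return g_i

# Two conflicting task gradients
g_a = np.array([1.0, 1.0])
g_b = np.array([-1.0, 0.0])
g_a_proj = project_conflicting(g_a, g_b)
print(g_a_proj)  # the projected gradient no longer opposes g_b
```

In a multi-task LoRA setting, such a projection would presumably be applied to per-task gradients of the shared low-rank adapter parameters before the optimizer step.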



Artifacts
Paper PDF