HIGHLIGHTS
- What: The authors propose BERTouch, a BERT-based language model fine-tuned for intent detection in Darija, which outperforms state-of-the-art models including OpenAI's GPT-4, achieving F1 scores of 0.98 and 0.96 on Darija and MSA, respectively. The authors discuss the tradeoff between using generalist LLMs and cross-lingual transfer learning, and they highlight the importance of domain-specific data annotation. In the analysis, they examine the tradeoff between the precision of specialized classifiers and the cost-effectiveness of retrieval-based methodologies. The authors provide a brief ...