Analysis of LLM Performance on AWS Bedrock: Receipt-item Categorisation Case Study
Gabby Sanchez, Sneha Oommen, Cassandra T. Britto, Di Wang, Jung-De Chiou, et al.
TLDR
This paper evaluates AWS Bedrock LLMs for receipt-item categorization, finding Claude 3.7 Sonnet offers the best balance of accuracy and cost.
Key contributions
- Evaluated four AWS Bedrock LLMs (Claude 3.7/4 Sonnet, Mixtral 8x7B, Mistral 7B) for receipt categorization.
- Assessed models on accuracy, response stability, and token-level cost efficiency.
- Compared zero-shot and few-shot prompting methods for performance and cost (see the sketch after this list).
- Found Claude 3.7 Sonnet provides the optimal balance of classification accuracy and cost.
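The digest does not include the paper's prompts or invocation code, but the rough sketch below illustrates how a zero-shot versus few-shot comparison like this can be wired up against AWS Bedrock using boto3's Converse API. The model ID, category taxonomy, and few-shot examples are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: zero-shot vs. few-shot receipt-item categorisation on AWS Bedrock.
# Model ID, categories, and example items are assumptions, not the paper's setup.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID; a region/inference-profile prefix may be required in practice.
MODEL_ID = "anthropic.claude-3-7-sonnet-20250219-v1:0"
CATEGORIES = ["Groceries", "Household", "Personal care", "Electronics", "Other"]

SYSTEM = (
    "You are a receipt-item classifier. "
    f"Answer with exactly one category from: {', '.join(CATEGORIES)}."
)

FEW_SHOT_EXAMPLES = (
    "Examples:\n"
    "Item: 'SEMI SKIMMED MILK 2L' -> Groceries\n"
    "Item: 'AA BATTERIES 4PK' -> Electronics\n"
    "Item: 'TOILET ROLL 9PK' -> Household\n\n"
)

def categorise(item_text: str, few_shot: bool = False) -> tuple[str, dict]:
    """Classify one receipt line; returns (predicted category, token usage)."""
    prompt = (FEW_SHOT_EXAMPLES if few_shot else "") + f"Item: '{item_text}' ->"
    response = bedrock.converse(
        modelId=MODEL_ID,
        system=[{"text": SYSTEM}],
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 10, "temperature": 0.0},
    )
    category = response["output"]["message"]["content"][0]["text"].strip()
    return category, response["usage"]  # usage holds inputTokens / outputTokens

# Example call: the few-shot variant pays for the example tokens on every request.
label, usage = categorise("ORGANIC BANANAS 5PK", few_shot=True)
print(label, usage)
```

Temperature 0 and a small `maxTokens` budget are one simple way to encourage the response stability the paper measures; the actual decoding settings used in the study are not reported in this digest.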
Why it matters
This study offers practical guidance for selecting cost-effective LLMs for production-oriented receipt categorization. It helps developers and businesses make informed decisions when deploying LLMs on AWS Bedrock, optimizing both performance and operational costs.
Original Abstract
This paper presents a systematic, cost-aware evaluation of large language models (LLMs) for receipt-item categorisation within a production-oriented classification framework. We compare four instruction-tuned models available through AWS Bedrock: Claude 3.7 Sonnet, Claude 4 Sonnet, Mixtral 8x7B Instruct, and Mistral 7B Instruct. The aim of the study was (1) to assess performance across accuracy, response stability, and token-level cost, and (2) to investigate what prompting methods, zero-shot or few-shot, are especially appropriate both in terms of accuracy and in terms of incurred costs. Results of our experiments demonstrated that Claude 3.7 Sonnet achieves the most favourable balance between classification accuracy and cost efficiency.
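As a back-of-the-envelope illustration of the token-level cost dimension, the snippet below converts the usage metadata returned by Bedrock's Converse API into a per-request cost. The per-1K-token prices are placeholders rather than the figures used in the paper; actual Bedrock pricing varies by model, region, and pricing changes over time.

```python
# Hedged sketch: per-request cost from Bedrock token usage.
# Prices are illustrative placeholders (USD per 1,000 tokens), not the paper's figures.
PRICE_PER_1K = {
    "anthropic.claude-3-7-sonnet-20250219-v1:0": {"input": 0.003, "output": 0.015},
    "mistral.mixtral-8x7b-instruct-v0:1": {"input": 0.00045, "output": 0.0007},
}

def request_cost(model_id: str, usage: dict) -> float:
    """Compute cost from the 'usage' dict returned by the Converse API."""
    price = PRICE_PER_1K[model_id]
    return (usage["inputTokens"] / 1000) * price["input"] \
         + (usage["outputTokens"] / 1000) * price["output"]

# Few-shot prompts inflate inputTokens on every call, so any accuracy gain has to
# outweigh the recurring prompt cost.
print(request_cost("anthropic.claude-3-7-sonnet-20250219-v1:0",
                   {"inputTokens": 180, "outputTokens": 4}))
```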