Model Insights

gemini-1.5-pro-001

Details

Developer: Google

License: NA (private model)

Model parameters: NA (private model)

Supported context length: 1M tokens (1,000k)

Price for prompt tokens: $3.50 per million tokens

Price for response tokens: $10.50 per million tokens
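
As a rough guide, the listed per-million-token prices can be turned into a per-request estimate. The Python sketch below simply applies those prices; the request sizes in the example are hypothetical, and real billing details (pricing tiers, caching, etc.) are not covered here.

```python
# Rough per-request cost estimate from the prices listed above:
# $3.50 per 1M prompt tokens, $10.50 per 1M response tokens.
PROMPT_PRICE_PER_M_TOKENS = 3.50     # USD
RESPONSE_PRICE_PER_M_TOKENS = 10.50  # USD

def request_cost_usd(prompt_tokens: int, response_tokens: int) -> float:
    """Estimated cost in USD for a single request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M_TOKENS
            + response_tokens * RESPONSE_PRICE_PER_M_TOKENS) / 1_000_000

# Hypothetical example: a 25,000-token RAG prompt with a ~309-token answer
# (309 is the average response length reported further down this page).
print(f"${request_cost_usd(25_000, 309):.4f}")  # ~$0.0907
```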

Model Performance Across Task-Types

ChainPoll Score

Short Context: 0.95

Medium Context: 1.00

Long Context: 1.00
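
The ChainPoll scores above come from Galileo's ChainPoll metric, which polls a judge LLM several times with chain-of-thought prompting and aggregates the verdicts into a 0-1 score. The report itself does not spell out the procedure, so the Python sketch below is only an illustration of the general idea; the `judge` callable is a hypothetical stand-in for whatever LLM client is used.

```python
# Illustrative ChainPoll-style adherence score (a sketch of the general idea,
# not Galileo's exact implementation): poll a judge LLM n times with
# chain-of-thought and return the fraction of "grounded" verdicts.

def chainpoll_style_score(judge, context: str, question: str, answer: str,
                          n_polls: int = 5) -> float:
    """`judge(prompt) -> str` is a hypothetical function that calls an LLM."""
    prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n\n"
        "Think step by step about whether the answer is fully supported by "
        "the context, then finish with exactly one line: "
        "'VERDICT: grounded' or 'VERDICT: hallucinated'."
    )
    grounded = 0
    for _ in range(n_polls):
        verdict_line = judge(prompt).strip().splitlines()[-1].lower()
        grounded += "grounded" in verdict_line
    return grounded / n_polls
```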

Model Insights Across Task-Types

Digging deeper, here’s a look at how gemini-1.5-pro-001 performed across specific datasets.

Short Context RAG

Medium Context RAG

This heatmap indicates the model's success in recalling information at different locations in the context. Green signifies success, while red indicates failure.


Long Context RAG

This heatmap indicates the model's success in recalling information at different locations in the context. Green signifies success, while red indicates failure.

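The report does not describe exactly how these recall heatmaps are generated, but they resemble a standard needle-in-a-haystack probe: plant a known fact at different depths of progressively longer contexts, ask the model to retrieve it, and record success or failure per cell. The Python sketch below is a generic, hypothetical version of such a probe; `ask_model` stands in for an actual API call.

```python
# Generic needle-in-a-haystack recall probe (a hypothetical sketch, not the
# report's exact methodology). Each (context length, needle depth) cell of
# the resulting grid corresponds to one cell of a heatmap like the one above.

FILLER = "The quick brown fox jumps over the lazy dog. "
NEEDLE = "The secret passphrase is 'blue-giraffe-42'. "
QUESTION = "What is the secret passphrase mentioned in the document?"

def build_context(target_tokens: int, depth_fraction: float) -> str:
    """Bury the needle at roughly `depth_fraction` of a filler document."""
    n_sentences = max(target_tokens // 10, 1)   # ~10 tokens/sentence (rough)
    sentences = [FILLER] * n_sentences
    sentences.insert(int(n_sentences * depth_fraction), NEEDLE)
    return "".join(sentences)

def recall_grid(ask_model, lengths, depths):
    """`ask_model(context, question) -> str` is a hypothetical LLM call."""
    return {
        (length, depth): "blue-giraffe-42" in ask_model(build_context(length, depth), QUESTION)
        for length in lengths
        for depth in depths
    }

# Example usage (hypothetical context lengths and ten depth buckets):
# grid = recall_grid(ask_model, lengths=[5_000, 25_000, 100_000],
#                    depths=[i / 10 for i in range(10)])
```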

Performance Summary

Short context RAG
Task insight: The model demonstrates exceptional reasoning and comprehension skills, excelling at short-context RAG. It shows good mathematical proficiency, as evidenced by its performance on the DROP and ConvFinQA benchmarks.
Cost insight: It is a great model, only slightly behind Claude 3.5 Sonnet at comparable pricing. If cost is your concern, it is better to try Gemini-1.5-Flash or Llama-3-70b.
Datasets (context adherence / avg response length):
DROP: 0.93 / 309
HotpotQA: 0.95 / 309
MS MARCO: 0.93 / 309
ConvFinQA: 0.98 / 309

Medium context RAG
Task insight: Flawless performance, making it suitable for any context length up to 25,000 tokens.
Cost insight: Great performance, but we recommend using the 30x cheaper Gemini-1.5-Flash.
Dataset (context adherence / avg response length):
Medium context RAG: 1.00 / 309

Long context RAG
Task insight: Flawless performance, making it suitable for any context length up to 100,000 tokens.
Cost insight: Great performance; you can use it. Alternatively, you can try Claude 3.5 Sonnet, which is in a similar price range, for more complicated tasks.
Dataset (context adherence / avg response length):
Long context RAG: 1.00 / 309
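
As a sanity check, the per-task ChainPoll scores near the top of this page are consistent with a plain average of the dataset-level context-adherence values, although the report does not state how task-level scores are aggregated:

```python
# Plain average of the short-context dataset scores (aggregation method is an
# assumption; the report does not specify it).
short_context = {"DROP": 0.93, "HotpotQA": 0.95, "MS MARCO": 0.93, "ConvFinQA": 0.98}
mean = sum(short_context.values()) / len(short_context)  # ≈ 0.9475, matching the reported 0.95
```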
