Showing 6 results | Metric: Overall
| Rank | Model | Paper | Overall | Date | Code |
|---|---|---|---|---|---|
| 1 | cpt-code S | Text and Code Embeddings by Contrastive Pre-Training | 97.70 | 2022-01-24 | openmatch/coco-dr |
| 2 | cpt-code M | Text and Code Embeddings by Contrastive Pre-Training | 97.50 | 2022-01-24 | openmatch/coco-dr |
| 3 | CodeT5+ 770M | CodeT5+: Open Code Large Language Models for Code Understanding and Generation | 92.70 | 2023-05-13 | salesforce/codet5, leiluk1/codesearcher |
| 4 | CodeT5+ 220M | CodeT5+: Open Code Large Language Models for Code Understanding and Generation | 92.40 | 2023-05-13 | salesforce/codet5, leiluk1/codesearcher |
| 5 | GraphCodeBERT | GraphCodeBERT: Pre-training Code Representations with Data Flow | 84.10 | 2020-09-17 | microsoft/CodeBERT |
| 6 | CodeBERT | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | 69.30 | 2020-02-19 | salesforce/codet5, microsoft/CodeBERT, graykode/commit-autosuggestions |
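All of the models above are embedding models evaluated on a retrieval-style metric: a query and each candidate code snippet are mapped to vectors, and candidates are ranked by similarity to the query. As a minimal, model-free sketch of that ranking step (the vectors below are toy values, not real model outputs; `cosine` and `rank` are illustrative helper names, not APIs from any of the listed repositories):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, candidates):
    """Return candidate names sorted by similarity to the query, best first.

    candidates: dict mapping snippet name -> embedding vector.
    """
    return sorted(candidates,
                  key=lambda name: cosine(query_vec, candidates[name]),
                  reverse=True)

# Toy example: the query embedding is closest to "binary_search".
query = [1.0, 0.0, 1.0]
snippets = {
    "binary_search": [0.9, 0.1, 0.8],
    "quicksort": [0.1, 1.0, 0.0],
}
ranking = rank(query, snippets)
```

In practice each model replaces the toy vectors with learned embeddings (e.g. from contrastive pre-training, as in cpt-code and CodeT5+), and the leaderboard metric summarizes how often the correct snippet ranks at or near the top.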