
GitHub Copilot Enhances Code Search with New Embedding Model



Ted Hisokawa
Sep 26, 2025 03:41

GitHub introduces a new Copilot embedding model, enhancing code search in VS Code with improved accuracy and efficiency, according to GitHub’s announcement.





GitHub has announced a significant upgrade to its Copilot tool, introducing a new embedding model that promises to enhance code search within Visual Studio Code (VS Code). This development aims to make code retrieval faster, more memory-efficient, and significantly more accurate, as detailed in a recent GitHub blog post.

Enhanced Code Retrieval

The new Copilot embedding model delivers a 37.6% improvement in retrieval quality, roughly doubles throughput, and shrinks the index to about one-eighth of its previous size. In practice, developers can expect more accurate code suggestions, faster response times, and lower memory usage in VS Code, with fewer irrelevant results surfacing when the model retrieves code snippets.

Why the Upgrade Matters

Efficient code search is crucial for a seamless AI coding experience. Embeddings, numerical vector representations of code and natural language, are what allow Copilot to retrieve semantically relevant content rather than relying on exact keyword matches. Higher-quality embeddings mean higher retrieval quality, which in turn improves the overall GitHub Copilot experience.
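
As a rough illustration of how embedding-based retrieval works, the sketch below ranks code snippets by cosine similarity to a natural-language query. The embed() function here is a toy hashing stand-in, not the Copilot model (which GitHub has not published as a library call); only the ranking logic reflects the general idea.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an embedding model: hashes tokens into a fixed-size
    vector. The real Copilot model is a trained neural network, not this."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def search(query: str, snippets: list[str], top_k: int = 3):
    """Rank code snippets by cosine similarity to a natural-language query."""
    q = embed(query)
    matrix = np.stack([embed(s) for s in snippets])   # (N, dim) unit vectors
    scores = matrix @ q                               # cosine similarity per snippet
    best = np.argsort(-scores)[:top_k]
    return [(snippets[i], float(scores[i])) for i in best]

print(search("read a json file", [
    "def load_json(path): return json.load(open(path))",
    "def add(a, b): return a + b",
]))
```

Swapping the toy embed() for a strong trained model is what determines retrieval quality; the ranking step itself stays the same.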

Technical Improvements

GitHub has trained and deployed this new model specifically for code and documentation, enhancing context retrieval for various Copilot modes. The update has shown significant improvements, with C# developers experiencing a 110.7% increase in code acceptance ratios and Java developers seeing a 113.1% rise.

Training and Evaluation

The model was optimized using contrastive learning techniques, such as InfoNCE loss and Matryoshka Representation Learning, to improve retrieval quality. A key aspect of the training involved using ‘hard negatives’—code examples that appear correct but are not—helping the model distinguish between nearly correct and actually correct code snippets.
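
To make the training objective concrete, here is a minimal sketch of an InfoNCE-style contrastive loss that combines in-batch negatives with mined hard negatives, written in PyTorch. The tensor shapes, temperature value, and function name are illustrative assumptions; GitHub's post describes the technique but not its exact implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_with_hard_negatives(q, pos, hard_neg, temperature=0.05):
    """InfoNCE loss using in-batch negatives plus mined hard negatives.

    q:        (B, d) query embeddings (e.g. natural-language requests)
    pos:      (B, d) embeddings of the matching ("correct") code snippets
    hard_neg: (B, K, d) embeddings of near-miss snippets that look right but aren't
    """
    q = F.normalize(q, dim=-1)
    pos = F.normalize(pos, dim=-1)
    hard_neg = F.normalize(hard_neg, dim=-1)

    # Similarity of each query to every positive in the batch (in-batch negatives).
    inbatch = q @ pos.T                               # (B, B)
    # Similarity of each query to its own K hard negatives.
    hard = torch.einsum("bd,bkd->bk", q, hard_neg)    # (B, K)

    logits = torch.cat([inbatch, hard], dim=1) / temperature
    # The correct ("positive") column for row i is column i.
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```

The hard-negative columns are what push the model to separate nearly correct snippets from actually correct ones, since in-batch negatives alone tend to be easy to tell apart.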

Future Prospects

GitHub plans to expand its training and evaluation data to include more languages and repositories. The company is also refining its hard negative mining pipeline to enhance quality further, with goals to deploy larger, more accurate models leveraging the efficiency gains from this update.
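
A hard negative mining pipeline of the kind mentioned above typically uses an existing embedding model to find, for each query, the snippets that score highest but are not the labelled answer. The sketch below shows one simple version of that idea; the function name and parameters are hypothetical and not taken from GitHub's pipeline.

```python
import numpy as np

def mine_hard_negatives(query_vecs, corpus_vecs, positive_ids, k=5):
    """For each query, pick the k most similar corpus items that are NOT the
    labelled positive: plausible-looking but wrong candidates for training.

    query_vecs:   (Q, d) L2-normalised query embeddings
    corpus_vecs:  (N, d) L2-normalised snippet embeddings
    positive_ids: length-Q list, index of the true snippet for each query
    """
    sims = query_vecs @ corpus_vecs.T            # (Q, N) cosine similarities
    hard = []
    for i, pos in enumerate(positive_ids):
        order = np.argsort(-sims[i])             # most similar first
        negs = [int(j) for j in order if j != pos][:k]
        hard.append(negs)
    return hard
```

Refining this step usually means filtering out false negatives (snippets that are actually correct but unlabelled), which is one reason mining quality matters as much as model size.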

This latest enhancement is a step towards making AI coding assistants more reliable and efficient for developers, promising a smarter and more dependable tool for everyday development.



Source: https://blockchain.news/news/github-copilot-enhances-code-search-with-new-embedding-model

