2024-06-19 13:00:00

Slashdot (RSS Feed) on Nostr

China's DeepSeek Coder Becomes First Open-Source Coding Model To Beat GPT-4 Turbo

Shubham Sharma reports via VentureBeat: Chinese AI startup DeepSeek, which previously made headlines with a ChatGPT competitor trained on 2 trillion English and Chinese tokens, has announced the release of DeepSeek Coder V2, an open-source mixture of experts (MoE) code language model. Built upon DeepSeek-V2, an MoE model that debuted last month, DeepSeek Coder V2 excels at both coding and math tasks. It supports more than 300 programming languages and outperforms state-of-the-art closed-source models, including GPT-4 Turbo, Claude 3 Opus and Gemini 1.5 Pro. The company claims this is the first time an open model has achieved this feat, putting it well ahead of Llama 3-70B and other models in the category. It also notes that DeepSeek Coder V2 maintains comparable general reasoning and language capabilities.
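For context on the architecture mentioned above, here is a minimal sketch of top-k expert routing, the core mechanism of a mixture-of-experts layer: a gating network scores a set of expert sub-networks for each token, and only the best-scoring few are run. The layer sizes, top-2 routing, and softmax gating below are generic MoE conventions chosen for illustration, not DeepSeek Coder V2's actual configuration.

```python
# Minimal sketch of top-k expert routing in a mixture-of-experts (MoE) layer.
# Illustrative only: dimensions, k=2 routing, and softmax gating are common
# MoE conventions, not confirmed details of DeepSeek Coder V2.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

gate_w = rng.normal(size=(d_model, n_experts))             # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector x through its top-k experts."""
    logits = x @ gate_w                                    # (n_experts,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                   # softmax gate
    chosen = np.argsort(probs)[-top_k:]                    # top-k expert ids
    weights = probs[chosen] / probs[chosen].sum()          # renormalized gates
    # Only the chosen experts execute, which is why an MoE model can carry a
    # large total parameter count while activating a small fraction per token.
    return sum(w * (x @ experts[i]) for i, w in zip(chosen, weights))

print(moe_forward(rng.normal(size=d_model)).shape)         # (8,)
```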

Founded last year with a mission to "unravel the mystery of AGI with curiosity," DeepSeek has been a notable Chinese player in the AI race, joining the likes of Qwen, 01.AI and Baidu. In fact, within a year of its launch, the company has already open-sourced a number of models, including the DeepSeek Coder family. The original DeepSeek Coder, with up to 33 billion parameters, performed decently on benchmarks with capabilities like project-level code completion and infilling, but supported only 86 programming languages and a 16K context window. The new V2 offering builds on that work, expanding language support to 338 languages and the context window to 128K -- enabling it to handle more complex and extensive coding tasks. When tested on the MBPP+, HumanEval, and Aider benchmarks, designed to evaluate the code generation, editing and problem-solving capabilities of LLMs, DeepSeek Coder V2 scored 76.2, 90.2, and 73.7, respectively -- ahead of most closed and open-source models, including GPT-4 Turbo, Claude 3 Opus, Gemini 1.5 Pro, Codestral and Llama 3-70B. Similar performance was seen across benchmarks designed to assess the model's mathematical capabilities (MATH and GSM8K). The only model that managed to outperform DeepSeek's offering across multiple benchmarks was GPT-4o, which obtained marginally higher scores in HumanEval, LiveCodeBench, MATH and GSM8K. [...]

As of now, DeepSeek Coder V2 is being offered under an MIT license, which allows for both research and unrestricted commercial use. Users can download both the 16B and 236B sizes in instruct and base variants via Hugging Face. Alternatively, the company is also providing access to the models via API through its platform under a pay-as-you-go model. For those who want to test the models' capabilities first, the company is offering the option to interact with DeepSeek Coder V2 via a chatbot.
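Below is a minimal sketch of loading one of the downloadable checkpoints with the Hugging Face transformers library. The repo id is an assumption based on the 16B instruct variant mentioned above (verify the exact names under the deepseek-ai organization on Hugging Face), and trust_remote_code is passed on the assumption that the repo ships custom architecture code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the 16B instruct checkpoint; check the deepseek-ai
# organization on Hugging Face for the exact name.
model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # assumes the repo includes custom model code
    device_map="auto",        # requires the accelerate package
)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```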
<a href="http://twitter.com/home?status=China's+DeepSeek+Coder+Becomes+First+Open-Source+Coding+Model+To+Beat+GPT-4+Turbo%3A+https%3A%2F%2Fnews.slashdot.org%2Fstory%2F24%2F06%2F18%2F226232%2F%3Futm_source%3Dtwitter%26utm_medium%3Dtwitter"; rel="nofollow"><img src="https://a.fsdn.com/sd/twitter_icon_large.png"></a>;
<a href="http://www.facebook.com/sharer.php?u=https%3A%2F%2Fnews.slashdot.org%2Fstory%2F24%2F06%2F18%2F226232%2Fchinas-deepseek-coder-becomes-first-open-source-coding-model-to-beat-gpt-4-turbo%3Futm_source%3Dslashdot%26utm_medium%3Dfacebook"; rel="nofollow"><img src="https://a.fsdn.com/sd/facebook_icon_large.png"></a>;

Read more of this story at Slashdot: https://news.slashdot.org/story/24/06/18/226232/chinas-deepseek-coder-becomes-first-open-source-coding-model-to-beat-gpt-4-turbo

Author Public Key
npub1rk3j5fc4ew5w9zd7kx5zfqt04tew0kan9swrr63ctn6k8wp8f65qwd8w8z
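The npub above is the author's public key in Nostr's NIP-19 bech32 encoding. A small sketch of recovering the raw 32-byte hex key, assuming the third-party bech32 package (pip install bech32):

```python
from bech32 import bech32_decode, convertbits  # pip install bech32

npub = "npub1rk3j5fc4ew5w9zd7kx5zfqt04tew0kan9swrr63ctn6k8wp8f65qwd8w8z"
hrp, data = bech32_decode(npub)              # human-readable part + 5-bit words
assert hrp == "npub"
raw = bytes(convertbits(data, 5, 8, False))  # regroup 5-bit words into bytes
print(raw.hex())                             # 32-byte hex public key
```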