> Many critical details regarding this scaling process were only disclosed with the recent release of DeepSeek V3
And so they decide not to disclose their own training information, just after telling everyone how useful it was to get DeepSeek's? Honestly, I can't say I care about "nearly as good as o1" when it's a closed API with no additional info.
It's not even "nearly as good as o1". They only compared to the older 4o.
You can safely assume Qwen2.5-Max will score worse than all of the recent reasoning models (o1, DeepSeek-R1, Gemini 2.0 Flash Thinking).
It'll probably become a very strong model if/when they apply RL training for reasoning. However, all the successful recipes for this are closed source, so it may take some time. They could do SFT based on another model's reasoning chains in the meantime, though the DeepSeek-R1 technical report noted that it's not as good as RL training.
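For what it's worth, here is a minimal sketch of what "SFT on another model's reasoning chains" could look like. The student model name, the <think>...</think> trace format, and the example data are my assumptions for illustration, not anything Qwen or DeepSeek have published:

    # Sketch: supervised fine-tuning on reasoning traces generated by a stronger "teacher" model.
    # Model name and trace format are assumptions for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")   # hypothetical student model
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")

    def sft_loss(prompt: str, reasoning: str, answer: str) -> torch.Tensor:
        """Next-token loss on prompt -> (reasoning + answer), masking out the prompt tokens."""
        target = f"<think>{reasoning}</think>\n{answer}"           # assumed trace format
        ids = tokenizer(prompt + target, return_tensors="pt").input_ids
        prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]  # approximate boundary
        labels = ids.clone()
        labels[:, :prompt_len] = -100      # train only on the teacher's trace, not the prompt
        return model(ids, labels=labels).loss

    loss = sft_loss("What is 17 * 24?",
                    "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
                    "408")
    loss.backward()                        # plug into any optimizer loop

The point of the sketch is just the data flow: the teacher's trace is the training target, which is why this imitates the teacher rather than improving past it the way RL training can.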
I thought there were three DeepSeek items on the HN front page, but this turned out to be a fourth one, because it's the Qwen team saying they have a secret version of Qwen that's actually better than DeepSeek-V3.
I don't remember the last time 20% of the HN front page was about the same thing. Then again, nobody remembers the last time a company's market cap fell by 569 billion dollars like NVIDIA did yesterday.
Somehow I failed to notice that 4 ÷ 30 is not 20%. It's more like 13%. That was a dumb mistake.
it's a scaling law for stocks!
HuggingFace demo: https://huggingface.co/spaces/Qwen/Qwen2.5-Max-Demo
Source: https://x.com/Alibaba_Qwen/status/1884263157574820053
This appears to be Qwen's new best model, API only for the moment, which they say is better than DeepSeek v3.
It is available at https://chat.qwenlm.ai/, under the model selector.
A Chinese company announcing this on Spring Festival eve is very surprising. The DeepSeek announcement must have lit a fire under them. I'm surprised anything is being done right now at these Chinese tech companies.
Well, DeepSeek engineers are (desperately) fire-fighting as they don't have nearly as much capacity as needed. Competitors either already rushed a release or decided to do a hush release of whatever they had in the pipeline. Sounds like everyone is working.
They are being attacked as well.
https://apnews.com/article/deepseek-ai-artificial-intelligen...
It's like when Gemini topped the Chatbot Arena leaderboard and OpenAI released a model the next day.
is gemini really better than e.g. claude 3.5?
Mostly, but not for coding. Also, the arena is pretty much a vibes benchmark; don't take it too seriously. LiveBench is a better indicator.
Gemini is actually pretty useless because of its rate limits.
This is not the reasoning model. If they beat DeepSeek V3 on benchmarks, I think a 'reasoning' model would beat o1 Pro.
I just ran my NYT Connections benchmark on it: 18.6, up from 14.8 for Qwen 2.5 72B. I'll run my other benchmarks later.
https://github.com/lechmazur/nyt-connections/
Kinda ambivalent about MoE in cloud. Where it could really shine though is in desktop class gear. Memory is starting to get fast enough where we might see MoEs being not painfully slow soon for large-ish models.
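A rough back-of-envelope on why, with numbers I'm assuming purely for illustration (4-bit weights, ~250 GB/s of desktop memory bandwidth, ~37B active parameters for a DeepSeek-V3-sized MoE): decode is mostly memory-bound, so tokens/sec is roughly bandwidth divided by the bytes of active weights touched per token, and an MoE only touches its active experts.

    # Back-of-envelope sketch; all numbers below are assumptions for illustration.
    def rough_tokens_per_sec(active_params_b: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
        """Crude upper bound: one full pass over the active weights per decoded token."""
        bytes_per_token = active_params_b * 1e9 * bytes_per_param
        return bandwidth_gb_s * 1e9 / bytes_per_token

    # Hypothetical desktop with ~250 GB/s memory bandwidth and 4-bit (0.5 bytes/param) weights:
    print(rough_tokens_per_sec(70, 0.5, 250))   # dense 70B: ~7 tok/s
    print(rough_tokens_per_sec(37, 0.5, 250))   # MoE with ~37B active params: ~13 tok/s

The total parameter count still has to fit in memory, which is the catch for desktop MoE, but the per-token speed tracks the active parameters.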
No weights, no proof.
Would you say the same for OpenAI releasing new models?
Not everything has to be a gotcha moment about Americans
I may be misremembering, but I think he has.
The significance of _all_ of these releases at once is not lost on me. But the reason for it is lost on me. Is there some convention? Is this political? Business strategy?
Alibaba probably doesn't want DeepSeek to get all the fame.
Sometimes a cigar is just a cigar
Today is the last day before the Chinese New Year.
My thoughts go out to the poor engineers who got put on call because someone scheduled a product release on the day before the biggest holiday of their year.
Party goes on
Now they need to fine-tune it like R1 and o1, and it will be competitive with SOTA models.
> We evaluate Qwen2.5-Max alongside leading models
> [...] we are unable to access the proprietary models such as GPT-4o and Claude-3.5-Sonnet. Therefore, we evaluate Qwen2.5-Max against DeepSeek V3
"We'll compare our proprietary model to other proprietary models. Except when we don't. Then we'll compare to non-proprietary models."