Markus PRO
AI & ML interests
Everything.
Recent Activity
liked mradermacher/MiniMax-M2.5-REAP-172B-A10B-i1-GGUF about 8 hours ago
liked Qwen/Qwen3.5-35B-A3B about 16 hours ago
liked unsloth/Qwen3.5-35B-A3B-Experiments-GGUF about 20 hours ago
replied to their post 4 days ago
Post
1206
🤗 Many cultures penalize or look down upon self-celebratory behavior. One such example is liking your own post. So why do I do it? Two reasons:
1. I disagree that self-celebratory behavior is inherently bad.
2. On the Hugging Face Hub, if your post has 0 reactions, it takes TWO whole clicks to react instead of one. So it is actually a UI hack that lowers the bar to engage.
So if you see me reacting to my own post and think 'Ugh, this guy is so full of himself', you are only half correct 😉
Now behold as I perform this magic trick called "Exhausting all reaction options for increased visual engagement" so you don't have to click twice to react. You're welcome!
Follow this aspiring 🤗 HF Hub influencer for more half-serious bloat in your feed 😄
posted an update 4 days ago
replied to their post 6 days ago
posted an update 6 days ago
Post
1595
# The most underrated feature of Qwen3-TTS: Voice embeddings! 🧑‍🦰💬
https://huggingface.co/collections/marksverdhei/qwen3-voice-embedding
Did you know that Qwen3 TTS actually uses voice embeddings?
Your voice is turned into a vector of 1024 (or 2048) dimensions,
and from this vector alone you can get your custom voice.
But the coolest part is that this means you can use math to modify voices: average them, swap gender, change pitch, mix and match voices, and even create an emotion space! It also enables semantic voice search!
The voice embedding model is actually just a tiny encoder with just a few million parameters. I've ripped it out of the TTS model so you can use the embedding model standalone. Check out my collection! :D
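To make the "math on voices" idea concrete, here is a minimal numpy sketch of the kinds of operations the post describes: blending two voices, shifting along an attribute direction, and cosine-similarity voice search. The embeddings here are random stand-ins and every name is mine, not the collection's actual API; it only assumes the embeddings behave like ordinary vectors.

```python
import numpy as np

# Stand-in 1024-dim voice embeddings (random placeholders for vectors
# you would get from the real encoder; all names here are hypothetical).
rng = np.random.default_rng(0)
voice_a = rng.normal(size=1024)
voice_b = rng.normal(size=1024)

def normalize(v):
    """Project onto the unit sphere so cosine math behaves."""
    return v / np.linalg.norm(v)

def mix(a, b, t=0.5):
    """Interpolate between two voices; t=0.5 is an even blend."""
    return normalize((1 - t) * a + t * b)

def shift(v, direction, strength=1.0):
    """Move a voice along an attribute axis, e.g. a pitch or gender
    direction estimated as mean(group_1) - mean(group_2)."""
    return normalize(v + strength * normalize(direction))

def search(query, bank):
    """Semantic voice search: rank bank rows by cosine similarity."""
    return np.argsort(-(bank @ normalize(query)))

blended = mix(voice_a, voice_b)
bank = np.stack([normalize(voice_a), normalize(voice_b), blended])
print(search(blended, bank)[0])  # -> 2 (the blend matches itself best)
```

The same interpolation trick generalizes: averaging many embeddings gives a "typical" voice, and subtracting group means gives reusable attribute directions for `shift`.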
Post
4570
Poll: Will 2026 be the year of subquadratic attention?
The transformer architecture is cursed by its computational complexity.
It is why you run out of tokens and have to compact. But some would argue that this is a feature, not a bug, and that it is also why these models are so good. We've been doing a lot of research on trying to make equally good models that are computationally cheaper, but so far, none of the approaches have stood the test of time. Or so it seems.
Please vote, don't be shy. Remember that the Dunning-Kruger effect is very real, so the person who knows less about transformers than you is going to vote. We want everyone's opinion, no matter your confidence.
👍 if you think at least one frontier model* will have no O(n^2) attention by the end of 2026
🔥 if you disagree
* Frontier models: models that match or outperform the flagship Claude, Gemini, or ChatGPT at the time on multiple popular benchmarks
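For anyone unsure what the O(n^2) in the poll refers to, here is a bare-bones sketch of standard scaled dot-product attention: the score matrix is n x n, so compute and memory grow with the square of the sequence length. This is a toy illustration with random tensors, not any particular model's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scores has shape (n, n): this is the quadratic term the poll
    # asks about. Doubling n quadruples it, since (2n)^2 = 4 * n^2.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

n, d = 512, 64
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (512, 64)
```

Subquadratic alternatives (linear attention, state-space models, sliding windows, and so on) all amount to never materializing that full n x n matrix.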
replied to their post 20 days ago
Aren't we doing both already? There's so much progress being made in compute optimization alone, afaik.

