Open to Collab
Furkan Gözükara (MonsterMMORPG) · PRO
1017 followers · 12 following
https://www.youtube.com/@SECourses
gozukarafurkan
FurkanGozukara
furkangozukara
AI & ML interests
Check out my YouTube channel SECourses for Stable Diffusion tutorials; they will help you tremendously with every topic.
Recent Activity
replied to their post about 16 hours ago
Compared the quality and speed differences (with CUDA 13 & Sage Attention) of BF16 vs GGUF Q8 vs FP8 Scaled vs NVFP4 for Z Image Turbo, FLUX Dev, FLUX SRPO, FLUX Kontext and FLUX 2. A full 4K step-by-step tutorial is also published: https://youtu.be/XDzspWgnzxI . Check the full 4K tutorial to learn more and to see the uncompressed images in their original quality and size. People have long wondered how much quality and speed difference there is between the BF16, GGUF, FP8 Scaled and NVFP4 precisions. In this tutorial I compared all of these precision and quantization variants for both speed and quality, and the results are pretty surprising. Moreover, we have developed and published an NVFP4 model quant generator app and an FP8 Scaled quant generator app; the links to the apps are below if you want to use them. Furthermore, upgrading ComfyUI to CUDA 13 with properly compiled libraries is now strongly recommended. We have observed noticeable performance gains with CUDA 13, so CUDA 13 ComfyUI is now recommended for both SwarmUI users and standalone ComfyUI users.
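Below is a minimal, illustrative timing harness, not the workflow used in the tutorial (which runs through ComfyUI/SwarmUI). It only shows how one precision variant of a FLUX-style pipeline could be timed from Python with diffusers, and how to check which CUDA version the local PyTorch build was compiled against. The model id, prompt, and step counts are placeholders.

```python
# Sketch of a speed measurement for one precision variant (assumption: this is
# not the tutorial's ComfyUI workflow). The GGUF Q8, FP8 Scaled and NVFP4
# variants would be benchmarked the same way after loading their own checkpoints.
import time
import torch
from diffusers import FluxPipeline

print("PyTorch CUDA build:", torch.version.cuda)  # e.g. "13.0" for a CUDA 13 build

prompt = "a photo of a red fox in the snow"  # placeholder prompt

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder model id
    torch_dtype=torch.bfloat16,      # the BF16 baseline from the comparison
)
pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable on smaller GPUs

# Warm-up run so one-time initialization does not distort the measurement.
pipe(prompt, num_inference_steps=4)

torch.cuda.synchronize()
start = time.perf_counter()
image = pipe(prompt, num_inference_steps=28).images[0]
torch.cuda.synchronize()

print(f"bf16: {time.perf_counter() - start:.1f} s")
image.save("flux_bf16.png")
```

To reproduce the kind of comparison described in the post, each quantized checkpoint would be swapped in for the bf16 weights and timed with the same prompt, step count, and seed, then the outputs compared visually for quality.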
replied to their post 1 day ago
replied to their post 2 days ago
Organizations
MonsterMMORPG's activity
liked a Space 3 months ago
Top Contributors To Follow 🔔 (Running) · Meet the most impactful users on Hugging Face
liked a model almost 3 years ago
ai-forever/Kandinsky_2.1 · Updated Apr 5, 2023 · 187 likes
liked a model about 3 years ago
sshleifer/distilbart-cnn-12-6 · Summarization · Updated Jun 14, 2021 · 542k downloads · 306 likes