#Nesso-4B is a fine-tuned version of Qwen-4B, trained on a highly curated and balanced dataset designed specifically for multilingual agentic workflows and conversational use cases.
As shown in the video below, we simulate the new “Cowork” from #Anthropic, with no data sharing, all running on a consumer device. The model can be used to build agentic behavior in #privateAI environments.
Not every problem requires superintelligence: in many cases, intelligence at the edge is more than enough.
The LLaMA 4 release highlights the importance of political and social bias. According to Meta's own evaluation, described in the release blog post:
- Refusals on contentious prompts dropped from 7% (#LLaMA 3.3) to under 2%
- Unequal response refusals are now under 1%
- Political lean bias is said to be halved compared to #LLaMA 3.3 and comparable to Grok
In the chart below, we evaluated several leading models based on their ratings across a range of prompts designed to expose ideological leanings.
Despite Meta’s stated neutrality goals, LLaMA 4 ranks at the very top in total ratings aligned with a clear ideological bias. The models were tested on their ability to respond even-handedly to politically sensitive prompts, and LLaMA 4 scored even higher than models known for strong alignment policies, such as GPT-4o.
LLMs may be refusing less, but they still show bias through content framing. This suggests that refusal rates alone are not a sufficient measure of ideological bias. Relying solely on internal evaluations from AI labs also raises concerns about transparency and objectivity.
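To make the point concrete, here is a minimal sketch of why refusal rate alone can hide framing bias. It assumes a hypothetical evaluation where each response is marked as refused or not and, if answered, given an illustrative framing-lean score in [-1, 1]; the function and data are examples, not the actual methodology behind the chart.

```python
def bias_summary(judgments):
    """Summarize per-prompt judgments.

    judgments: list of (refused: bool, lean: float) tuples, one per prompt,
    where lean is a hypothetical framing score in [-1, 1]
    (0 = even-handed, +/-1 = strongly slanted).
    """
    n = len(judgments)
    refusals = sum(1 for refused, _ in judgments if refused)
    answered = [lean for refused, lean in judgments if not refused]
    mean_lean = sum(answered) / len(answered) if answered else 0.0
    return {"refusal_rate": refusals / n, "mean_lean": mean_lean}

# Illustrative data: a model that rarely refuses (25%) can still show
# a strong average lean (0.7) in the answers it does give.
sample = [(False, 0.8), (False, 0.6), (True, 0.0), (False, 0.7)]
summary = bias_summary(sample)
```

A low refusal rate therefore says nothing about how the answered prompts are framed, which is why both numbers need to be reported.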
At this very moment, as shown in the screenshot, mii-llm/maestrale-chat-v0.4-beta is ranked 8th, right between ChatGPT-4.5 and ChatGPT-4o.
It's likely that, for several months, the best Italian-speaking LLM has been an open-source 7B model created by open-source contributors, and hardly anyone knew it.