Are Uncensored LLMs Bad For Us?
Once you’ve delved into the world of LLMs beyond OpenAI’s ChatGPT or Anthropic’s Claude, you’ll be astounded by the vast array of open-source LLMs at your disposal. Hugging Face, the GitHub of AI models, alone hosts over 400k models, 100k datasets, and 150k applications, many of them free for personal and commercial use. The sheer volume of options is staggering.
You can use models that are transparent about the datasets they were trained on, publish how well they perform against shared benchmarks, and never record or collect your data, because you choose how and where to host them. Is this too good to be true? Are any of these models better than the big proprietary ones? Wait, there are also uncensored models? My big brain is aching from trying to wrap itself around the plethora of options, and the voice in my head is screaming, “You idiot! We should have taken the blue pill and not discovered all this.” Should we give our data and privacy away for the good of humanity and convenience? Are we adequately weighing the consequences of letting big tech and the government sink their talons deeper into us, bending us to their will through incremental concessions that gradually forfeit our rights in exchange for technological advancement?
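To make the self-hosting point concrete, here is a minimal sketch using the Hugging Face transformers library. The model name (TinyLlama) is just an illustrative small open model I’ve chosen for the example; any open model on the Hub would work the same way, and the exact choice is an assumption rather than a recommendation:

```python
from transformers import pipeline

# Load an open model from the Hugging Face Hub. The weights are downloaded
# once and then run entirely on your own hardware, so your prompts never
# leave your machine and no third party logs what you ask.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative small open model
)

# Generate text locally, just like calling a hosted API, minus the data sharing.
result = generator("Why would someone self-host an LLM?", max_new_tokens=80)
print(result[0]["generated_text"])
```

That is the whole appeal in a dozen lines: the model, the data it sees, and the place it runs are all under your control, not a vendor’s.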