There are so many myths and misconceptions around AI models: what they can do, how they are trained, even the fundamentals of how the data for them is collected and formatted to make sense. The idea that we can "train our own model" by just feeding it a ton of conversations or notes is unrealistic, but why exactly is that?
Today we get into a two-part read that will be one of the most value-dense we have covered on the show yet. It breaks down the entire process from beginning to end: identifying the core value, then sourcing, qualifying, and prepping the data, and then training, fine-tuning, adjusting, and testing an AI model built from it.
The Spirit of Satoshi project is an incredible open source endeavor, and the team reveals tons of great details about the complexities and challenges of building an LLM, the incredible work they are doing building novel tools to crowdsource the hardest part of the process, and, of course, how Bitcoin and Lightning enable better tools to make this all possible.
Check out the original articles at Spirit of Satoshi (Link: http://tinyurl.com/4jsvmz3z) & Satoshi GPT (Link: http://tinyurl.com/msyr4m5t)
Host Links
Check out our awesome sponsors!
“Revolutions bring disruptions and disruptions bring opportunities.”
~ Nicky Verd