AI is a powerful tool, and thus it necessarily comes with incredible risk. It could obsolete millions of workers, taking jobs and productive capacity away from humans and consolidating them into corporations. It could make the average individual too dangerous, putting incredible power into the hands of bad people. It may even present an existential threat to humanity, a veritable nuclear bomb made available to everyone in the world...
What could all of this mean? Doesn't it suggest we need to tightly control, license, and scrutinize access to these base tools? Or is there an underlying philosophy exposed by these fears, and a more prudent and honest course of action? Find out in today's AI Unchained episode!
Check out the Yann LeCun video shared toward the end of this episode, Why Open Source AI is Our Only Hope (Link: http://tinyurl.com/yx5tmmnv)
Guest Links
Host Links
Check out our awesome sponsors!
"These contributions were pivotal in the image generation space, setting Stable Diffusion on a different path from Dall-E. Having an open model led to ...
Is the pursuit of artificial general intelligence a pipe dream? Are the economics of building massive AI models unsustainable? And what if the future...
"Do not expect the vertiginous pace of progress to abate. The trend lines look innocent, but their implications are intense... Scaling up simple deep...