In this landscape, there is a noisy mix of serious technologists and some very unserious hustlers. How do you know who to listen to?
Here are some credible voices who, I feel, are honest about what they don't know; they also happen to know quite a lot, and their newsletters and social feeds are worth following.
I track how technology impacts people, teams, and organizational innovation, and I advise organizations on those changes.
Here are some of the resources I bring to designing human-centered, AI-powered enterprises.
<aside> 📏
Some of the better LLM "leaderboard trackers" that measure key stats across all the currently available open and closed systems.
</aside>
<aside> 🎓
Center for AI Safety (CAIS, pronounced "case") https://www.safe.ai/
Based in San Francisco, probably the most robust of the AI safety research organizations.
</aside>
<aside> 🎓 Anyone working at CSAIL at MIT https://ei.csail.mit.edu/people.html
</aside>
<aside> 🎓 LMSYS.org https://lmsys.org/about/
</aside>
<aside> 🎓 Dr. Fei-Fei Li, a pioneer in the AI field and now founder of World Labs. https://profiles.stanford.edu/fei-fei-li
World Labs is working on what comes next after LLMs, which they call "Spatial Intelligence": AI models that natively represent the 3D world in time.
</aside>
<aside> 👉🏼
The U.S. National Institute of Standards and Technology (NIST) has published several important documents regarding this AI moment:
A glossary of terms for responsible and trustworthy AI: https://docs.google.com/spreadsheets/d/e/2PACX-1vTRBYglcOtgaMrdF11aFxfEY3EmB31zslYI4q2_7ZZ8z_1lKm7OHtF0t4xIsckuogNZ3hRZAaDQuv_K/pubhtml
The AI Risk Management Framework (AI RMF): https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF
</aside>
<aside> 🍎 Alex Sarlin https://edtechinsiders.substack.com/
</aside>
<aside> 🤵🏼‍♂️ Ethan Mollick, author of "Co-Intelligence": https://substack.com/home/post/p-152600543
Also see his prompt library: https://www.moreusefulthings.com/prompts
</aside>