My friend David Eaves has the best tagline on his blog: "If writing is a muscle, this is my gym." So I asked him if I could adapt it for my new (and sometimes weekly) one-hour video show on oreilly.com, Live with Tim O'Reilly, in which I interview people who know more than I do and ask them to teach me what they know. It's a mental workout, not just for me but for our participants, and one I can keep coming back to with new questions over time. Learning is a muscle. Live with Tim O'Reilly is my gym, and my guests are my personal trainers. This is how I've learned throughout my career. Exploratory conversations with people are a big part of my everyday work, but on this show, I do it in public.
My first guest, on June 3, was Steve Wilson, author of my favorite recent O'Reilly book, The Developer's Playbook for Large Language Model Security. Steve's day job is Chief AI and Product Officer at the cybersecurity company Exabeam. He also founded and co-chairs the Open Worldwide Application Security Project (OWASP) Foundation's GenAI Security Project.
While preparing with Steve, I was immediately reminded of a wonderful passage in Alain de Botton's book How Proust Can Change Your Life, which reframes Proust as a self-help author. Proust, lying on his sickbed, asks a friend to tell him about his trip. Whenever the friend rushes ahead, Proust sends him back into the story by saying "slower," until the friend shares every detail, down to watching an old man feed the pigeons on the station steps.
Why am I telling you this? Steve had said something about AI security that I understood in a superficial way, but didn't really understand. So I laughed and told Steve about Proust, and from then on, whenever he went through something too quickly for me, I said "slower," and he knew what I meant.
This captures part of the essence of what we want to create with this show. There are many podcasts and interview shows that stay at a high conceptual level. In Live with Tim O'Reilly, my goal is to get really smart people to go a little more slowly: to tell vivid stories and explain what they mean in a way that helps us all go a little deeper, while still offering quick, useful takeaways.
This seems particularly important in the age of AI-assisted coding, which lets us build so quickly that we can end up on an unstable foundation of things we only thought we understood. As my friend Andrew Singer taught me 40 years ago, "The skill of debugging is figuring out what you really told your program to do, not what you thought you told it." That's even more true today in the world of AI evals.
"Slower" is also what a personal trainer says to someone rushing through their reps. Increased time under tension is a proven way to build muscle. So I'm not entirely mixing my metaphors here. 😉
In my interview with Steve, I started by asking him to tell us about some of the biggest security issues developers face when coding with AI, especially when vibe coding. Steve tossed off the observation that paying attention to API keys is at the top of the list. I said "slower," and here's what he told me:
https://www.youtube.com/watch?v=auaernwv7fw
As you can see, unpacking the meaning of "be careful" took him on a Proustian tour: from bots that scan GitHub code repositories for accidentally exposed keys, to the risks and mistakes underlying that short piece of advice, to the story of a coder whose account was compromised after his keys were visible in a live coding session on Twitch. As Steve exclaimed, "They're secrets. They're meant to be secret!"
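The practical upshot of "be careful with API keys" is simple: never hardcode them in source that might be committed or streamed. A minimal sketch of the standard alternative, reading the key from the environment (the variable name `MYSERVICE_API_KEY` is hypothetical; use whatever your provider specifies):

```python
import os


def load_api_key(name: str = "MYSERVICE_API_KEY") -> str:
    """Read an API key from the environment instead of hardcoding it.

    Keep the actual value in your shell profile or a local .env file
    that is listed in .gitignore, so it never lands in the repository
    (or on screen in a live coding session).
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Export it in your environment; "
            "never paste the key into source code."
        )
    return key
```

This doesn't make a leaked key safe, of course; it just keeps the key out of the one place bots are constantly scanning.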
Steve also issued an eye-opening warning about the security risks of hallucinated packages. (I had imagined "the package doesn't exist, so it's not a big deal," but it turns out that malicious programmers study commonly hallucinated package names and register compromised packages under those names!) He offered some spicy observations on the relative security advantages and disadvantages of the various major AI players, and explained why running AI models locally in your own data center isn't safer unless you do it right. He also spoke a bit about his role as Chief AI and Product Officer at the information security company Exabeam. You can watch our full conversation here.
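One simple defense against hallucinated package names, not from Steve's talk but consistent with his advice, is to scan AI-generated code for imports that aren't on a list of dependencies a human has already vetted, before anything gets installed. A sketch using Python's standard `ast` module (the function names are mine):

```python
import ast


def imported_packages(source: str) -> set:
    """Collect the top-level package names imported by Python source code."""
    packages = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                packages.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            packages.add(node.module.split(".")[0])
    return packages


def unvetted_imports(source: str, allowlist: set) -> set:
    """Return imports that are not on a human-vetted allowlist.

    A hallucinated (or typosquatted) package name surfaces here before
    `pip install` ever runs, giving a human a chance to confirm the
    package is real and trustworthy.
    """
    return imported_packages(source) - allowlist
```

Existence on a package index is not proof of safety, since attackers register plausible-sounding names precisely so they will be found; the point of the allowlist is that a person, not the model, decides what gets installed.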
My second guest, Chelsea Troy, whom I spoke with on June 18, is essentially perfectly in tune with the "slower" idea. In fact, her talk at the recent O'Reilly AI Codecon took a "not so fast" look at some much-discussed computer science papers. In our conversation, her takes on the three key skills software engineers working with AI need, on why "best practices" aren't necessarily a good reason to do something, and on what software developers need to understand about how LLMs work under the hood are all pure gold. You can watch our full conversation here.
One thing I did a little differently in this second interview was to use the live training features of the O'Reilly learning platform to bring in audience questions early in the conversation, mixing them with my own rather than leaving them to the end. It really worked. Chelsea herself spoke about her experience teaching on the O'Reilly platform and how much she learns from the questions attendees ask. I completely agree.
https://www.youtube.com/watch?v=srxf4zoqknm
Upcoming guests include Matthew Prince of Cloudflare (July 14), who will talk about Cloudflare's remarkably broad role in internet infrastructure, his fears that AI could lead to the death of the web, and what content developers can do about it (subscribe here). Marily Nika (July 28), author of Building AI-Powered Products, will teach us about AI product management (register here). Arvind Narayanan (August 12), co-author of the book AI Snake Oil, will talk with us about his paper "AI as a Normal Technology" and what it means for the future of AI and employment.
The full schedule will be published soon. It's a bit lighter over the summer, but we may slot in additional sessions in response to breaking topics.
