Lauren: Okay, okay, okay. Welcome back to Tech Insider Weekly.
Derek: Hey everyone, good to have you here. New week, new version of the same tech chaos. I've watched this cycle repeat enough times to know the pattern by heart.
Lauren: Oh man, today is stacked. We're starting with AI in healthcare, where a friendly caregiver app suddenly has to meet the FDA like it just got called to the principal's office. I've watched this collision happen. Founders build something clever, then realize regulatory wasn't optional.
Derek: Right, right, right. One minute it's checking symptoms, next minute it's writing treatment plans and the legal team is having a very bad day. FDA, liability, the whole gnarly mess.
Lauren: Exactly. And under that drama is the ops question. Tiny startup, high stakes medical calls, real liability. I've sat through enough board conversations about liability attribution to know this one keeps CFOs up at night.
Derek: Then we zoom out to the actual money pit behind the magic trick, the hardware, GPUs, inference bottlenecks, margins that evaporate the second your cloud bill arrives. I've seen this story before.
Lauren: Plus the fragile moat thing, where your big secret edge is basically a short-term discount from Nvidia's favorite reseller. We'll come back to that GPU economics mess. I've pattern matched on how that story ends.
Derek: And then, agents. Desktop-driving AI coders, Cursor, Astral, Claude, all promising an AI software engineer on your laptop, or at least that's the dream they're selling.
Lauren: But how do you let that thing near prod when you're not sure the startup even did its security homework? I've been both the operator asking that question and the investor watching founders skip it.
Derek: Which tees up our finale, the Delve fake compliance scandal, SOC 2 cosplay, and how hype around tools like Harvey can make buyers forget to ask basic questions. I've watched this pattern collapse before when speed becomes an excuse.
Lauren: Yeah, and we'll give you super simple checks so you can spot flimsy AI vendors without becoming a full-time security analyst. Trust me, the honest teams will light up when you ask the right questions.
Derek: All right, enough foreplay.
Lauren: Let's get into it. Segment one, AI and healthcare startups begins right now. Okay, so a founder hacks together an AI caregiving system for aging parents. Meds, early warning signs, weekend project becomes a funded startup overnight.
Derek: Classic pattern. You solve a family problem, suddenly every caregiver wants access. I've seen this at startups a thousand times. What works in one kitchen doesn't scale to a thousand clinics without someone asking hard questions first.
Lauren: Right. At first it's duct tape, off-the-shelf LLM, cheap tablet, prompts so it talks like Mom's friendly nurse. But then you hit the wall where this isn't just reminders, it's health advice. I've sat in enough boardrooms to know that's when the legal team stops smiling.
Derek: The second it suggests don't go to the hospital, you've crossed into medical decisions, not a calendar app.
Lauren: Caregivers say anything is better than my dad forgetting his meds. Investors ask, are you a cute app or a clinical device the FDA will care about? I've watched those conversations happen in the same room, and they are not talking about the same product.
Derek: One path is ship fast, fix live. The other is build like infrastructure. Where the boring stuff, the logging and guardrails, actually matters because failures don't just cascade, they compound with liability.
Lauren: Caregivers are drowning now, but I've learned the more these tools slip from calendar plus chat to diagnosis plus dosing, the more they look like medical devices. I've pattern matched this escalation at portfolio companies. Everything changes once you cross that line.
Derek: Yeah, systems that start as note-taking buddies, then scope creep hits. We'll summarize, we'll do billing codes, suddenly we'll suggest diagnoses, prioritize which patient to see first. You're not an app anymore.
Lauren: So when people say we're meeting the FDA, what does that look like?
Derek: Engineers have this realization: Oh, this isn't a Chrome extension. It becomes data collection, prospective studies, documenting every model change. Hospitals ask, show me your AI is at least as good as standard care. And that's the moment founders realize they're not shipping software anymore.
Lauren: That evidence is slow and expensive. Founders think it's a scaling problem. It's not. It's validation. I've seen teams burn runway thinking regulatory is just another operational challenge you hire for.
Speaker 3: I talked to an engineer whose team shipped an AI scribe into a clinic—just note taking. They still had to integrate with creaky EHRs, handle edge cases like "patient has three middle names," prove the system never mangles medication lists. That's non-negotiable. That's also the unsexy stuff that makes the difference between worked great in trials and worked great in prod.
Lauren: Which is the one thing you really don't want mangled. I've watched what happens when an AI quietly creates liability nobody saw coming.
Speaker 3: Exactly. Once they touch triage, this chest pain goes urgent, that waits, the bar doesn't just scale, it transforms. Now they're logging every suggestion, override, outcome. Helpful tool becomes you could quietly increase risk and nobody notices until it's too late.
Lauren: This is where Ops freaks me out. When an AI suggestion harms a patient, who's on the hook? The startup, hospital, the doctor who clicked approve because they're drowning? I've been in enough board conversations around attribution to know that ambiguity is existential.
Derek: Legally, everyone is a little on the hook and nobody feels safe.
Lauren: Small teams need clinical validation, red teaming, guardrails. But move at regulator speed, you die as a startup. Move at hacker speed, someone gets hurt. I've watched founders try to thread that needle, and it's brutal.
Speaker 3: They try hybrid moves, shadow mode, where AI suggests things but humans ignore them, just gather data. Hard constraints, like we never change medication doses automatically. Those guardrails protect everyone. It's boring infrastructure work, but it's the work that actually matters.
Lauren: And tons of logging, the unglamorous stuff that actually protects you when something goes wrong.
Speaker 3: So much logging, plus permissioning, so AI can't see notes it shouldn't touch. Boring infrastructure, unsexy permissioning, but it protects everyone. Doctors assume it's safe, startup cuts corners on the boring stuff, the clinician faces the lawsuit.
Lauren: Meanwhile the caregiver founder thinks, I wanted Mom to take pills on time. How did I end up in regulatory hell? I feel for them. I've been the operator brought in to clean up that exact mess.
Derek: Intent is beautiful; reduce burnout, give families a break; but software that talks like a nurse gets treated exactly like a nurse, with all the liability and all the expectations that come with it.
Lauren: So you have caregiver strain, clinical risk, regulation, and a third piece founders don't see coming: operations. Who literally pays for a twenty-four-seven AI nurse? That's constant computation in every living room and clinic. I watch this blind spot constantly.
Derek: That's the unit economics question nobody's asking yet. And I have a hunch the math breaks when you add up 24-7 inference cost, compliance overhead, and clinical validation. That's where startups hit the wall.
Lauren: Right.
Derek: Exactly.
Lauren: Shifting gears for a second, that AI caregiver we just talked about? Under the hood, it's basically a walking cloud bill. I've watched founders' eyes glaze over when they realize the unit economics.
Derek: Oh man, yes. As an engineer, here's what actually terrifies me. It's not the training, it's shipping, users loving it, and then watching your infrastructure costs climb faster than you can raise pricing. I've seen this exact trap close on good teams.
Lauren: Right. Training is that one big bonfire. Inference is like leaving the stove on in a thousand apartments all day, every day. And as an operator, that's where I watch the unit economics story collapse.
Derek: Exactly. Every user query is a forward pass through this enormous model – you stack token limits, context windows, maybe tools – and suddenly each call is shockingly expensive compared with old school SaaS.
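To make Derek's point concrete, here's a back-of-the-envelope sketch of per-call inference cost. All prices and token counts are hypothetical illustrations, not any provider's real rate card.

```python
# Rough per-call cost model for an LLM-backed feature.
# All numbers below are illustrative assumptions, not real pricing.

def cost_per_call(prompt_tokens, completion_tokens,
                  price_in_per_1k=0.01, price_out_per_1k=0.03):
    """Dollar cost of one forward pass, billed per 1k tokens."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# One chat turn with a big stuffed context window:
call = cost_per_call(prompt_tokens=6000, completion_tokens=800)  # $0.084

# An always-on assistant: 200 calls per user per day, 10k users, 30 days.
monthly = call * 200 * 10_000 * 30  # ~$5M/month of pure inference
```

Even with made-up numbers, the shape of the problem holds: a fat context window multiplied by always-on usage turns pennies per call into a cloud bill that dwarfs old-school SaaS COGS.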
Lauren: And then founders realize, wait. My gross margins might never look like normal software. I've seen that moment in a boardroom. It's brutal.
Derek: Because behind that chat box is a rack of GPUs that want power, cooling, and premium cloud pricing. If your product is always-on assistant, you are basically renting a Ferrari to do food delivery.
Lauren: Painfully accurate. So talk through the inference bottleneck as an engineer. What does it feel like week to week?
Derek: In code, it starts simple. You hit an API, add logging, then traffic doubles, latency spikes. You start batching requests, rewriting prompts, caching responses, but you keep hitting the ceiling. You are bound by how many GPUs you can get and how tightly you can pack work onto them. I've watched this progression a hundred times. It never gets easier.
Lauren: So instead of can we build it, the question becomes can we afford for people to use it. Builders hate it.
Derek: Exactly, which is where new players show up. Someone builds the infrastructure layer, abstracts away the pain, and suddenly everyone plugs in. Gimlet Labs, Andromeda, whoever. Their pitches will give you cheaper, more flexible access to GPUs.
Lauren: Yeah, they're saying you don't have to be OpenAI to get decent economics. Usage-based, multi-cloud, spot instances, all the messy optimization abstracted away. What they're really selling is we'll handle the infrastructure pain, but the pain just moves. It doesn't vanish.
Derek: Under the hood, they're doing gnarly things, packing multiple small models on the same GPU, routing workloads between data centers, swapping in cheaper hardware when latency allows. As an engineer, that is catnip.
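A toy version of the packing Derek describes, fitting several small models onto shared GPUs, can be sketched as a first-fit bin packer. The GPU size and model memory footprints here are invented for illustration.

```python
def pack_models(models, gpu_mem_gb=80):
    """First-fit packing: place each (name, mem_gb) model on the
    first GPU with enough free memory, opening a new GPU if none fits."""
    gpus = []  # each entry: {"free": GB remaining, "models": [names]}
    for name, mem in models:
        for gpu in gpus:
            if gpu["free"] >= mem:
                gpu["free"] -= mem
                gpu["models"].append(name)
                break
        else:
            gpus.append({"free": gpu_mem_gb - mem, "models": [name]})
    return gpus

# Hypothetical model fleet (names and sizes are made up):
models = [("embedder", 10), ("reranker", 20), ("chat-7b", 40),
          ("summarizer", 30), ("classifier", 15)]
layout = pack_models(models)  # five models squeezed onto two 80GB GPUs
```

Real schedulers also juggle latency SLOs, interference between co-located models, and live migration, which is exactly why this layer is genuinely hard infrastructure and not just a reseller markup.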
Lauren: As an operator, I hear cheaper inference and my brain goes straight to margins. Cool, your COGS come down. But now your whole advantage might be, we have a sweet deal on GPUs through Startup X. That's fragile; I've watched that story end.
Derek: Yeah, once everyone can rent from Gimlet or Andromeda, your special sauce cannot be 'we got there first.' It becomes middleware, another line item on your bill, negotiable by quarter. That's the moat evaporation story playing out in real time.
Lauren: And that's my question: if your edge is just access, how long before your moat evaporates? You're not a product, you're a reseller with good marketing. I've watched this pattern play out enough times to know it compounds against you.
Derek: I'm half with you. There is real infrastructure there, scheduling, autoscaling, fault tolerance. But I agree, over time it looks like cloud 2.0. Margins compress, customers negotiate, and you get compared line by line. I've watched this cycle play out before. The competitive window collapses, everyone becomes a commodity.
Lauren: And we're already seeing young companies think like infrastructure players. Upstage grabbing thousands of AMD chips, small teams saying, if we don't own part of the stack, we're at the mercy of whoever does. I used to think that was founder paranoia; now I know better.
Derek: Which is wild for a seed stage founder. Now you're basically doing supply chain strategy before you've proven product market fit.
Lauren: And burning a ton of capital early. That's the other risk: these are insanely capital intensive bets for companies that have not proven product market fit. I've sat in boardrooms where that realization was the quiet kill shot.
Derek: So someone listening who wants to start an AI company, how should they think about this?
Lauren: I'd draw a bright line: if your value prop is basically "we're cheaper GPUs," assume that advantage decays fast. You'd better layer in something sticky: workflow, data network effects, proprietary models tuned on real usage. I've seen what happens when founders skip this step.
Derek: And if you're building an application, be deeply suspicious of locking your whole business to one vendor discount; the margin is great until that partner realizes they're also your competitor. I've seen founders learn that lesson the expensive way.
Lauren: Or until your customers realize they can talk directly to the same GPU cloud and cut you out. That's the "thanks for the intro" problem. I've watched operators do it.
Derek: The "thanks for the intro, we'll take it from here" problem.
Lauren: Exactly. The founders I like in this space treat cheap inference as an ingredient, not the main dish. They use it as a lever to think bigger. I've pattern matched on the ones who get this distinction versus the ones who don't, and the outcomes are very different.
Derek: And that connects to where we're headed next. Once you lower the cost of each inference call, the economics change. You stop thinking chatbot and start thinking agent that can actually do real work. It's that unsexy infrastructure moment enabling the flashy possibilities.
Lauren: Right. Give that same compute to something that can click buttons and write code and now you're flirting with this AI software engineer idea. Yeah, that's where things get interesting and messier.
Derek: And I've seen what happens when you give a coding agent a big shiny GPU credit card and no guardrails: it is chaotic. Very fun. Also occasionally expensive, and sometimes both at the same time.
Lauren: Okay, let's jump into what an AI engineer really is, and why every dev tool startup is suddenly promising one. Okay, okay, okay. Picture this. You go to bed and an AI agent quietly fans out across your code base. It refactors, writes tests, opens pull requests, even comments in your team's snarky style. It's the automation dream we've been selling for 15 years, just with a new paint job. So you wake up and your app is faster, cleaner, and slightly more passive-aggressive. I've watched enough sprint retros to know teams are already treating that as a feature, not a bug.
Derek: Exactly. That fantasy is what every startup is selling right now, and what Cursor getting that valuation is hinting at. But here's the thing I've watched over and over: the real question is whether that valuation survives the moment everyone can do it.
Lauren: Right. And this is a level up from today's Copilots. Copilot, Cursor, they sit in your editor and autocomplete. They help, but they wait for you. I've pattern matched enough on founders to know the difference between a tool and an agent, and most teams haven't.
Derek: The new agents act, not just suggest. They plan, they click, they run commands. You give a goal, like "kill this flaky test," and it goes hunting.
Lauren: So how does that compare to what OpenAI and Anthropic are rolling out?
Derek: OpenAI is snapping up Astral, which handles agents navigating dev tools. Anthropic just shipped Claude with computer control. And when both labs move it like that, it's a signal the capability is finally baked. The race is heating up.
Lauren: Like actually move the mouse and type?
Derek: Yep, remote control with a brain. Amazing for speed, absolutely terrifying for everything else.
Lauren: That is both incredible and mildly terrifying. I've watched what happens when founders get powerful tools without guardrails. The confidence outpaces judgment.
Derek: Mm-hmm. On the plus side, you get a tireless junior dev who never sleeps. On the scary side, you just handed a stochastic parrot the keys to production, which I've watched make very confident but very wrong decisions at big tech.
Lauren: Okay, so that's exactly where my operator brain kicks in. If the core trick is we drive the mouse and keyboard, how do you build a company when every big LLM can copy that in three months? I've watched this commoditization play out before. You're not building defensibility. You're building a feature that gets cloned.
Derek: Instinctively, you don't sell clicks, you sell workflow. The dream is your agent learns your stack, your tickets, your staging quirks. It becomes the glue only you understand. But here's the thing most founders miss.
Lauren: I buy that story, but I'm not sure most tools get that far. A lot feel like fancy macros. I've sat through enough pitch meetings to know most founders don't realize they're selling the wrong thing until their Series A conversations start falling apart.
Derek: Exactly. It's fancy wrapping around UI automation. Most teams never get past scripts that happen to ship.
Lauren: And then there's the buyer side. Say I'm VP Eng at a bank. Some startup pitches AI software engineer can ship code to prod. My risk alarms are screaming, and I've sat through enough security reviews to know those alarms are exactly right. I've been both sides of that table.
Derek: Absolutely not. Thanks for coming.
Lauren: Exactly. So you get this tension. Founders need to move fast. Security teams want the opposite. I've watched both people in the room, and they are not incentivized the same way. I've watched that misalignment wreck shipping timelines.
Derek: I watched this pattern at big tech repeatedly. The competitive window collapsed. What took three years to copy five years ago now takes weeks. Any startup that said we own your workflow had to fight the platform team cloning it, making it free, and the startup's moat just evaporated.
Lauren: So if you're building an AI coding agent now, you know two things: your UX is copyable and the cloud provider can squash you if you get traction. I've watched that movie before. It does not end well for the startup.
Derek: Which means your real moat is unglamorous stuff: integration depth, reliability. Maybe you're the one vendor that doesn't nuke prod at three a.m. and ruins a customer's quarter. In this market, that's not a nice to have, that's the entire game.
Speaker 3: Meaningful.
Derek: And this is where founder psychology gets gnarly. Early, the incentives scream, ship the flashy agent demo, go viral, raise Series A. The enterprise buyer quietly wants, prove you won't wreck my systems. I've watched good teams get crushed in that gap.
Lauren: And if this works even halfway, I'm not replacing senior engineers, I'm changing their job. They become reviewers, system designers, babysitters of a very fast, very confident intern. That's not just a culture shift, it's a hiring and incentive restructuring most teams aren't ready for.
Derek: But only if the buyer actually trusts the agent and trust doesn't ship in beta.
Lauren: Which loops us right into the next problem. If I'm going to let an AI touch production, I need to trust the company behind it did the boring homework. I've seen enough shortcuts in my career, and had enough board conversations about the ones that went wrong, to know when a startup is cutting corners on compliance.
Derek: You mean the have you actually locked this thing down homework?
Lauren: Exactly. Security, audits, the unsexy stuff. Because if startups are already faking that on plain SaaS, and I've watched board conversations where that came up as a "we'll address it later" problem, imagine the temptation when they're selling agents with root access.
Derek: Yeah, for founders willing to fib on a compliance cert to close a deal. Still, what are they cutting corners on when it comes to an agent that can SSH into your servers? That's the question that keeps operators up at night.
Lauren: After the break, I want to zoom in on that, the Dell fake compliance mess, what SOC2 actually means, and how a buyer tells the difference between a safe agent and a very pretty landmine. This is where I see founders either build real moats or dig their own graves. I've watched both outcomes.
Derek: Good, because the trust story, the boring reliability and security work might be the only defensible moat these AI agents actually have.
Speaker 4: Right.
Lauren: Or saves a few real engineers from cleaning up catastrophic messes. I've been the person who had to run that cleanup. Never again. With that in mind, OK, we have to talk about Delve.
Derek: Oh man, yes, the fake compliance bomb. And I've seen this pattern before, not with agents yet, but with every wave where speed becomes the excuse for cutting corners.
Lauren: So get this, for anyone who missed the headlines, they were accused of claiming security certifications they did not actually have. I've watched this playbook before. Founders convince themselves the logo matters more than the substance.
Derek: That is not a rounding error. That is the kind of lie that gets board seats emptied.
Lauren: No, in B2B, especially with AI agents near code or legal data, those badges are basically you may now trust us with your crown jewels. I've sat in enough board meetings where a buyer's entire risk calculus hinged on seeing that checkmark.
Derek: Right. Break down the badges, though. SOC 2 sounds fancy. What is it in plain English?
Lauren: SOC 2 is basically: an outside auditor checked your security and process controls. Things like access control, logging, how you onboard and off-board employees, how you handle incidents.
Derek: So it is, do you lock the doors, notice break-ins, and kick people out when they leave the company?
Lauren: Exactly. And ISO 27001 is similar, more global, but same idea. You defined how you protect data and an auditor verified you were doing the thing you wrote down.
Derek: So if you lie about that, you're not just sloppy. You're telling customers we invited adults into the room and they actually check things, when the room is empty.
Lauren: Or worse, full of interns with root access. From an operator lens, faking it is a hard red line. Stuff breaks, models hallucinate, fine. But when you fake the guardrails themselves, that's a character problem. I've watched it destroy companies once the truth came out.
Derek: And a compounding risk, because other people then use your SOC 2 as their excuse to skip questions. That's how the lie spreads.
Lauren: Exactly, and that's what scares me. These certifications become a substitute for thinking, so a lie at the vendor quietly infects an entire chain of buyers. I've pattern matched this scenario before; it cascades in ways you cannot predict or unwind.
Derek: Okay, okay, okay. Contrast that with Harvey, the legal AI. Giant law firm logos, sky-high valuation, and suddenly the compliance story becomes a halo that nobody questions.
Lauren: Yeah! Harvey gets this huge price tag, every law firm logo on the slide, and suddenly every GC is like, if they are in, it must be safe. Logo-driven security is not due diligence.
Derek: And the risk is buyers start treating valuation and investor pedigree as a substitute for actually asking hard questions. I've watched this play out: logo goes in the deck, rigor goes out the window.
Lauren: Totally. I've been the investor in the room, and I can tell you, nobody checks.
Derek: Have you ever seen an investor pass on a hot round because the log retention policy was mid?
Lauren: Never. Not once.
Derek: Mm-hmm. Right. In dev tool land, I've watched this pattern a hundred times. Explosive growth, big users, you bolt "SOC 2 soon" onto the sales deck, and everyone squints and ships anyway because the competitive window is that brutal.
Lauren: And then a deal like OpenAI buying Astral lands and every founder hears speed is life. I get the pressure, I've been there. But you cannot outrun a compliance disaster. I've watched it happen.
Derek: Yep, you think if I just survive long enough to get acquired or close a Series A, the compliance stuff becomes someone else's problem. That's the trap: founders mistake speed for strategy.
Lauren: But the boring stuff is the blast radius limiter. If your agent can touch prod or client contracts, those missing controls decide whether a mistake is a support ticket or a front page story. I've watched that gap firsthand.
Derek: So let me ask, do you think this is mostly bad actors doing a conscious con or good teams crushed by the velocity of the market?
Lauren: I think there are a few outright liars, and a lot of people quietly convincing themselves that 'in progress' is close enough. That is still on leadership. I've sat across the table from both types. The conscious choice is what haunts you.
Derek: You do not accidentally put a fake certificate on your home page. That's where it gets dark.
Lauren: Exactly. Someone uploads that image, someone writes that line in a deck. Those are conscious acts.
Derek: OK, practical mode: if I'm a buyer and I don't have a CISO reading every line item, what's the real move here? What actually protects me?
Lauren: Two super simple checks: first, ask for the official SOC 2 or ISO report letter, not just the logo. It will have a firm name and a date. Any team worth your trust will hand it over without hesitation, and their willingness tells you something.
Derek: And if they will not share it, you have your answer.
Speaker 3: RADAR.
Lauren: Pretty much, if they won't show it, something is off.
Derek: Second check?
Lauren: Ask, "Who's your security owner, and when was your last third party pen test?" You're not grading them technically, you're listening for whether they have a person, a cadence, and a clear story. I've done this enough times to know when the answer is real.
Derek: So if the answer is, "Our engineer sort of handles that," you walk.
Lauren: Or at least you pause. Ask, "How do you revoke access when someone leaves? What logs do you look at when something seems off?" Real teams have muscle memory stories, shaky ones hedge, and I've learned to trust that instinct.
Derek: You're not trying to be an auditor, you're just poking the facade to see if there's a building behind it.
Lauren: Exactly; trust the humans more than the badges. The honest teams will light up when you ask; the shaky ones'll get weird.
Derek: And in this AI agent moment the boring follow up question might be the thing that stops a disaster before it's a front page story.
Lauren: Or your customers.
Derek: Or both.
Lauren: So the thing sticking with me is that caregiver-founder story. A weekend hack for their parents' care, and suddenly they're staring down FDA playbooks and liability committees. I've been at that inflection point, when your side project becomes someone's actual health care.
Speaker 4: Yeah, that this is not a Chrome extension moment. I've watched that transition happen over and over at big tech. You start as a weekend hack, then you're touching real patients, and suddenly you're not building a toy. You're building infrastructure with actual stakes.
Lauren: Exactly. One line takeaway for me, if AI is touching people's health, money, or jobs, you're not building a toy. You're building infrastructure with actual stakes. I've pattern matched this wrong too many times to ignore it now.
Speaker 4: And infrastructure has receipts. SOC 2, real audits, someone who can actually answer who is on the hook when this breaks. That's the line between serious operators and everyone else squinting and hoping.
Lauren: If this gave you something to argue about with your team, hit follow, drop a quick review, and share it with that one reckless founder friend. Those arguments are where real companies get built.
Speaker 4: And if you've got a wild AI story or a guest we should absolutely grill, send it our way. Tag us, drop it in. We're always hunting for the gnarly stuff.
Lauren: Thanks for hanging out with us.
Speaker 4: New episodes every Wednesday.
Lauren: See you next time on Tech Insider Weekly.