Derek: Okay, okay, okay. Welcome back to Tech Insider Weekly.
Speaker 3: Hey everyone, good to be here with you.
Derek: So today the AI money hose is fully open and Q1 funding for AI startups went kind of off the rails.
Speaker 3: Oh man, seed rounds that look like Series Bs, chips tied to Nvidia, defense names like Shield AI soaking it up.
Derek: Right. And the question is, inside real companies, what happens when the "AI will replace humans" hype hits messy org charts and legacy systems?
Speaker 3: Spoiler, the nine AI employees that one startup bragged about are not showing up to standup.
Derek: Yeah, yeah. We'll sort out what agent swarms are actually good at and where that crosses into the future of work.
Speaker 3: And then Siri, because Apple quietly building this very controlled, rented AI hub is a huge tell about where assistants go next.
Derek: Plus, the same need for control and audit trails is bleeding straight into defense, AI on drones, ships, and systems that cannot hallucinate targets.
Speaker 3: Which sets up our last segment, tiny teams using off-the-shelf parts to build serious defense hardware and the ethics of autonomous weapons.
Derek: Yeah, founders drawing red lines before the term sheet shows up. I mean, that matters.
Speaker 3: All right, funding frenzy first. Who's raising, what's real, and where this could break.
Derek: Let's get into segment one, AI startups and venture capital. Okay, okay, okay. So get this. Crunchbase has global venture funding for Q1 brushing up near 300 billion dollars.
Speaker 3: Dude, 300 billion. That is silly money.
Derek: And a huge chunk is AI. What blows my mind is seed rounds. You've got pre-revenue AI startups raising like 8 to 10 million at 50 or even 100 million caps.
Speaker 3: That's the part that feels like a meme from the engineer side. You're still duct taping models to a Postgres instance and someone's like, congrats, you're worth nine figures.
Derek: Right. So the question is, what are investors actually paying for? Because it's not current revenue.
Speaker 3: Yeah, so are they buying moats or vibes?
Derek: Vibes are definitely in the deck, but I'd say they're thinking they're buying three things, data, distribution and talent.
Speaker 3: Walk me through that, because a lot of these seed AI teams are three people in a demo.
Derek: Exactly. Data means access to a weird proprietary data set. Distribution means a way to bolt AI into an existing workflow and spread fast. Talent is we backed the ex-DeepMind folks before anyone else.
Speaker 3: The problem from the builder's side is those advantages decay fast. Your secret prompt trick lasts what? What, two weeks before Twitter clones it?
Derek: Totally, which is why hardware has become this separate hype ladder. You see Nvidia putting money into AI chip startups like Rebellions and suddenly valuations jump because it looks more defensible.
Speaker 3: Right, right, right. With chips, investors feel like there's an actual wall. You need fabs, you need design talent, you need years.
Derek: But even there, I wonder how much is we want exposure to anything near Nvidia versus deep conviction that this startup's chip wins.
Speaker 3: So it's like buying merch at the Nvidia concert. You're not sure about the opening band, but whatever, you want the shirt.
Derek: Exactly. And then you've got companies like Shield AI. Their latest round priced them around $12.7 billion. That is not vibes. That is real defense contracts, real hardware, real deployments.
Speaker 3: Yeah, that one feels different. They're building actual autonomous systems. They're in live missions. You can at least draw a straightish line from contract value to that price tag.
Derek: And that's the subtlety. High valuation doesn't automatically mean fluff. If you've locked in multi-year government deals, the math can backstop a big number.
Speaker 3: But then founders see that and go, oh, cool, any AI thing could be a $10 billion company if I put defense or infrastructure in the deck.
Derek: Sprinkle in dual use and you're done.
Speaker 3: Wait, wait. So from the operator's side, how scary is this? You've run startups through normal-ish markets. What happens when the valuation gets way ahead of the actual business?
Derek: It's like strapping a rocket to a scooter. Feels fun until you hit a tiny bump. If growth slows or that big contract slips by even a quarter, the whole cap table gets stressed.
Speaker 3: Because the next round either has to be even bigger or you're staring at a down round and morale hell.
Derek: Yep, teams start optimizing for valuation theater instead of product, fancy announcements, aggressive hiring, and then suddenly you're doing layoffs because the numbers never caught up.
Speaker 3: From the engineer chair those whiplash cycles suck. One month you're told we are the chosen AI unicorn, next month you're updating your LinkedIn.
Derek: And on the defense and government side, the risk is even sharper. If a policy change hits or a procurement officer gets cold feet, your anchor customer can vanish.
Speaker 3: Plus, ethically, you're tying your company's fate to, like, drone programs and surveillance budgets. That is a very specific bet.
Derek: So the real question for founders right now is do you actually want the unicorn price tag this early, or is a calmer trajectory safer for the company and the people inside it?
Speaker 3: And from the investor's side, are you backing a durable business or are you basically funding a very fancy demo and hoping the story becomes true later?
Derek: Which makes me wonder about the next wave. If you can raise huge rounds on the promise that AI agents will replace half your own team,
Speaker 3: Then what happens when those AI employees don't perform like the deck promised?
Derek: Exactly. How far can you push the story that software will do the work of humans before that gap becomes impossible to ignore?
Speaker 3: And when that gap shows up inside the company, inside the org chart, what does that do to how we work every day?
Derek: Building on that, picture this, a founder wakes up, opens Slack, and there are nine green dots online.
Speaker 3: Yeah.
Derek: All people named things like Dev-A3 and Ops-Bot-2 already committing code and filing tickets.
Speaker 3: Dude, I saw that OpenClaw deck. We have nine AI employees. Okay, okay, okay. Do they really?
Derek: Right. On paper, it sounds like they replaced half a dev team; in practice I'm guessing it's one human plus a swarm of scripts and agents glued together with duct tape.
Speaker 3: Totally. The story I heard is they wired OpenClaw agents to GitHub, Jira,
Lauren: Notion, even the CRM,
Derek: Wow.
Lauren: the bots open issues, propose pull requests, ping each other when builds fail.
Derek: So get this, my ops brain hears that and goes, cool automation. My investor brain hears, you're counting cron jobs on your headcount slide?
Lauren: Yes, yes, exactly.
Derek: And that matters because this founder then tells investors I run a nine-person product org with one salary.
Lauren: Which sounds amazing until you ask, okay, but who decided what to build?
Derek: There it is.
Lauren: Agents are great at, like, here's a spec, now write the tests, refactor the boring stuff, fix flaky CI. But product judgment, priorities? That's still a very human bottleneck.
Derek: And coordination: once you have a bot army, someone has to be the general. Who's checking that Agent Four didn't just undo Agent Three's fix and introduce a security hole?
Lauren: Security is the one that scares me. You're giving these things repo access, maybe production credentials. One mis-scoped permission and your AI employee wipes the database.
Derek: Or happily emails customer data to the wrong place because the prompt said, share this with the team.
Lauren: Right.
Derek: So from the funding side, here's what's wild. That OpenClaw-style company raises, say, a $65 million seed. The deck is all about swarms of agents replacing whole teams.
Lauren: For a seed, that's movie money.
Derek: Exactly. The milestone isn't hire 30 people, it's prove that 10 humans plus 100 agents can ship like 100 humans.
Lauren: And investors are like, cool, show me the chart. Are your deploys faster? Are bugs dropping? Is revenue per actual employee ridiculous?
Derek: And are you cutting real cash burn? If you're bragging about nine AI employees, but still hiring the same headcount as everyone else, you just built an expensive Rube Goldberg machine.
Lauren: The worst version is they raise that monster seed. Then hire a giant human team to babysit the bots.
Derek: Oh man, I've seen the early signs. We need a head of agent ops. Translation, someone to wrangle the chaos our marketing promised.
Lauren: Okay, but to be fair, there is real value here.
Derek: Totally.
Lauren: For grunt work, agent swarms are incredible. You want to migrate a thousand tickets, clean a messy CRM, generate test data, hammer your API for QA. Agents do not get bored.
Derek: That's why I'd actually underwrite it. Show me you can compress the mundane work so humans are in creative, relational, strategic mode.
Lauren: The breakage happens when founders pretend bots can handle messy, cross-functional stuff. Have the agent do product discovery, talk to users, negotiate a contract. No.
Derek: Or let agents self-organize. In real companies, even humans struggle with that.
Lauren: Yeah, yeah, exactly.
Derek: Here's what I ask in those pitches. One, what human roles are you not hiring because of agents? Two, where are the hard stops where a person must approve?
Lauren: And three, who owns the failure when an agent messes up because the AI did it is not going to fly with your board or your customers.
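Those two ideas, hard stops where a person must approve and a named owner for every failure, can be sketched in a few lines. This is a minimal illustration with invented names (`ApprovalGate`, `AgentAction`, the `AUTO_APPROVED` set), not any real agent framework: low-risk actions pass through automatically, anything risky parks until a human signs off, and every action lands in an audit log with an accountable person attached.

```python
# Hypothetical sketch of agent "hard stops": auto-approve boring work,
# force a human sign-off on anything risky, and keep an audit trail so
# "the AI did it" is never the final answer. All names are invented.

from dataclasses import dataclass, field

# Assumed policy: actions agents may take with no person in the loop.
AUTO_APPROVED = {"open_ticket", "run_tests", "format_code"}

@dataclass
class AgentAction:
    agent: str          # e.g. "Dev-A3"
    kind: str           # e.g. "run_tests", "rotate_db_creds"
    detail: str
    owner: str = ""     # human accountable for the outcome
    status: str = "pending"

@dataclass
class ApprovalGate:
    default_owner: str                      # fallback accountable human
    audit_log: list = field(default_factory=list)

    def submit(self, action: AgentAction) -> AgentAction:
        # Low-risk work sails through; risky work hard-stops for a person.
        if action.kind in AUTO_APPROVED:
            action.status = "auto_approved"
        else:
            action.status = "needs_human"
        action.owner = self.default_owner
        self.audit_log.append(action)
        return action

    def approve(self, action: AgentAction, human: str) -> AgentAction:
        # Explicit sign-off reassigns ownership to the approving human.
        action.status = "approved"
        action.owner = human
        return action

gate = ApprovalGate(default_owner="eng-lead")
a1 = gate.submit(AgentAction("Dev-A3", "run_tests", "nightly suite"))
a2 = gate.submit(AgentAction("Ops-Bot-2", "rotate_db_creds", "prod db"))
a2 = gate.approve(a2, human="sre-on-call")
```

The point of the sketch is the default: nothing an agent does is ownerless, so the board question "who is responsible?" always has a name attached.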
Derek: Or regulators. So, zooming out, all these bot armies have to live somewhere. They run on someone's models, someone's devices, someone's platforms.
Lauren: Yeah, that control point gets really interesting when you think about assistants. If your AI employee is, say, living inside your phone and talking through an assistant, who's actually in charge?
Derek: That is exactly where Siri suddenly matters again. If Siri becomes the front door for your agents, Apple decides which bots get in, which models they can use, how much data they see.
Lauren: And if Apple is now cool with third-party brains inside their assistant, that shifts the power balance between the platforms and these AI-first startups.
Derek: So speaking of control, next we should talk about what happens when the biggest gatekeeper in consumer tech opens the door to rival models.
Lauren: And whether that makes Siri finally useful in this new AI world or just a very pretty traffic cop.
Derek: Shifting gears, basically Apple woke up one day and said, fine, we'll rent the brains instead of build them.
Lauren: Oh man, yes. They turned Siri into this weird little hostel for chatbots. Welcome, travelers! Pick a bunk. Don't break my privacy policy.
Derek: Exactly. Strategically, they're doing two moves at once. First, let other models live inside Siri. Second, turn the App Store into an AI mall where those models sell you experiences.
Lauren: Right, so get this. Technically, it's like Siri becomes the front desk. You ask a question, Siri figures out which guest model to tap, hands off your request, then cleans up the answer before you see it.
Derek: And that routing layer is the whole power play. Apple decides which models are allowed upstairs, what data they see, and how fast they can move.
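That "front desk" routing layer can be made concrete with a toy sketch. Everything here is invented for illustration (`front_desk`, `GUEST_MODELS`, `redact`) and is not any real Apple API: the gatekeeper classifies the request, limits what data crosses the doorway, hands off to a guest model, and filters the answer before the user sees it.

```python
# Toy sketch of an assistant as gatekeeper: route to a guest model,
# redact data on the way in, filter the answer on the way out.
# All names are hypothetical, not a real platform API.

import re

# Hypothetical registry: capability -> allowed guest model.
GUEST_MODELS = {
    "code": "guest-coder",
    "chat": "guest-chat",
}

def redact(text: str) -> str:
    # The platform decides what data the guest sees: here, strip
    # anything shaped like an email address before handoff.
    return re.sub(r"\S+@\S+", "[redacted]", text)

def classify(request: str) -> str:
    # Stand-in for real intent routing.
    return "code" if "bug" in request or "function" in request else "chat"

def call_guest(model: str, request: str) -> str:
    # Stub for the external model call.
    return f"[{model}] answer to: {request}"

def front_desk(request: str) -> str:
    model = GUEST_MODELS[classify(request)]
    answer = call_guest(model, redact(request))
    # Post-filter: the platform gets the last word on what comes back.
    return answer.replace("weird opinion", "[filtered]")

reply = front_desk("fix this bug for alice@example.com")
```

The power play lives in those three choke points: the registry (who gets upstairs), the redaction (what they see), and the post-filter (what the user hears).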
Lauren: Yeah, and where it breaks is that Siri has always been kind of timid. Low stakes, super filtered. These external models are chaotic. Hallucinations, weird opinions, code that can actually touch your stuff.
Derek: So basically Apple is saying we'll let the chaos in, but only through a tiny padded doorway.
Lauren: Mm-hmm. With a security guard, metal detector, and three parental controls.
Derek: Exactly. But I think that's the bet. They don't need the most powerful model. They need the safest-feeling one that still does enough to keep you on iPhone.
Lauren: And that's where they contrast hard with Google and OpenAI. Google wants you inside its own model universe. OpenAI wants you in its app world. Apple wants you to stay on the phone, then quietly rents whichever brain you ask for.
Derek: Which is classic Apple: own the rails, tax the traffic. If Siri becomes the default dispatcher, then all these AI companies become App Store developers again.
Lauren: But do they actually win mind share that way? Because when someone has a wild chat with an assistant, they say, "ChatGPT told me," not "My iPhone told me."
Derek: Yeah, that's the tension: Apple might own the gate, but the story in people's heads belongs to the model brand.
Lauren: Or the opposite: the model's commoditized and the only thing that matters is whose device is easiest and most private to use them on.
Derek: And Apple is happy in a world where models are commodities. Think of it this way. An AI startup today launches a standalone app. In Apple's new world, they're more like a plug-in. They expose capabilities that Siri and other apps can call.
Speaker 3: Right.
Lauren: So instead of one AI employee app, it's like specialized microbrains wired into Calendar, Mail, Notes.
Derek: Exactly. And Apple can charge them for placement, subscriptions, maybe even for system-level hooks. It's SaaS, but the platform sits between you and your own user.
Lauren: That's rough for startups, though. If Apple controls the routing, they can always say, oh, users like scheduling agents now. Cool, we'll build our own light version and set it as default.
Derek: Yep, same story.
Lauren: Same as the early App Store. Great place to grow, dangerous place to depend on.
Derek: So where do you land on renting AI as a strategy? Smart or too timid?
Lauren: For Apple's incentives, it's smart. They protect hardware margins, lean on others for research, and still look modern. But if they move too slowly, users might form habits with competitors.
Derek: I'm a little harsher. If you're the most valuable phone company on Earth and your assistant still gets dunked on by memes, maybe you build something bold instead of endlessly curating everyone else's.
Speaker 3: Fair.
Lauren: But here's where being the careful gatekeeper gets attractive: big buyers who care about control. If Apple can prove it can host dangerous models in a very constrained, auditable way, that story rhymes with what governments and militaries are asking for.
Derek: You're talking about the people who buy fleets of planes and ships, not iPads.
Lauren: Exactly. And on the other side of this, you've got companies like Shield AI wiring AI straight
Speaker 4: into battlefield drones.
Lauren: AI straight into hardware that moves in the real world.
Derek: Phones, then fleets; consumer polish meets military contracts. This is where the stakes jump way up.
Lauren: Oh, okay, okay, okay. Shifting gears, we have to talk about how weird the money has gotten in defense.
Derek: Oh man, here we go.
Lauren: We started this episode with AI unicorns, but in defense, you've got Shield AI held up as the example of yes, this is expensive, but at least it has contracts and missions behind it.
Derek: Right, like the pitch is not vibes, it's this swarm flew real missions and the Pentagon keeps calling back.
Lauren: Exactly. And then right next to it, you've got Saronic doing autonomous boats and AI robotics labs raising at an $11 billion price tag for robotics and autonomy.
Derek: For a lot of founders, that's the moment of, wait, I can build sci-fi hardware and still get software multiples?
Lauren: Yeah, 10 years ago if you said I'm building drone swarms for the Navy, people pictured Lockheed lifers in a windowless office. Now it's six people from SpaceX and Anduril in a WeWork wiring off-the-shelf cameras to a consumer GPU stack. Dude.
Derek: With a dog and a broken espresso machine?
Lauren: But seriously, what changed is the ingredients. You've got commercial models, cheap simulation engines, and hardware that is basically a flying Linux box.
Derek: So software teams suddenly don't need a thousand-person defense contractor. A tiny team can write the autonomy stack, test it in simulation, then push it to a drone almost like a mobile app update.
Lauren: And software-defined hardware sounds buzzy. But it means the ship is a platform. You can reprogram missions, behaviors, even who it listens to from a laptop instead of welding new metal, which is catnip for investors. We get hardware defensibility with SaaS margins.
Derek: Right.
Lauren: Right. But here's the part I keep coming back to. That same push-a-new-build mindset now applies to weapons. You're shipping a patch and suddenly the drone is allowed to fire in one more edge case.
Derek: Yeah.
Lauren: Do you buy that?
Derek: I think it's emotionally convenient. You tell yourself, if we don't, someone worse will, or we're making it more precise. Those are real arguments, but also coping frameworks.
Lauren: And the money reinforces the story. If somebody hands you a giant term sheet to fix defense, your brain really wants the hero narrative to be true. Founders say, my choice isn't war or peace. My choice is dumb weapons or smarter ones that maybe kill fewer people.
Derek: That maybe is doing a lot of work.
Lauren: Yeah, because once you have swarms of autonomous ships, things get messy. Imagine thirty uncrewed boats, each running slightly different models after last night's update, all staring at the same radar blip.
Derek: One interprets it as a hostile craft, another calls it a fishing boat, a third flags it ambiguous but possibly hostile. Who breaks the tie?
Lauren: And if there's a human in the loop, are they actually in it or are they a tired officer with three seconds to rubber stamp what the system recommends?
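That thirty-boats-one-blip scenario can be sketched as a fusion rule. This is a toy illustration with invented names (`fuse_classifications`, the `boat-N` ids), not any real targeting system: the key design choice is that a disagreement never gets majority-voted away, it hard-escalates to a human with the full spread of opinions rather than one pre-digested answer.

```python
# Toy sketch (all names invented) of fusing classifications from a
# swarm: unanimity yields only a recommendation, and any split between
# models is an automatic escalation to a human, opinions attached.

from collections import Counter

def fuse_classifications(votes: dict) -> dict:
    """votes: boat id -> label, e.g. {'boat-1': 'hostile craft'}."""
    tally = Counter(votes.values())
    label, count = tally.most_common(1)[0]
    if count == len(votes):
        # Unanimous: still a recommendation, never a firing decision.
        return {"decision": "recommend", "label": label}
    # Any disagreement is a hard stop: show the human every opinion,
    # not a single confidence-weighted average.
    return {"decision": "escalate_to_human", "opinions": dict(votes)}

unanimous = fuse_classifications(
    {"boat-1": "fishing boat", "boat-2": "fishing boat"}
)
split = fuse_classifications(
    {"boat-1": "hostile craft", "boat-2": "fishing boat",
     "boat-3": "ambiguous"}
)
```

Whether the human actually weighs those opinions or rubber-stamps them in three seconds is exactly the UI question the discussion raises next.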
Derek: The UI question is basically a moral question.
Speaker 5: Right.
Lauren: Yes, put a big red override button in the corner and maybe the human pauses, bury the controls in five menus and you've nudged them into letting the machine decide.
Derek: Suddenly we've got design ethics as foreign policy.
Lauren: That's the part founders underestimate. They think they're making robots. They're actually making defaults for how force gets used when people are scared and tired.
Derek: And this is all happening inside giant government contracts with 30-year timelines. You can say we just delivered what was in the spec and no one owns the decision the ship made in a foggy harbor.
Lauren: So what do you tell the founder who really wants to build in this space?
Derek: Yeah.
Lauren: I'd say write down your line now, before the first check, before the first deployment. What won't you build? Under what conditions do you pull the plug?
Derek: Almost like a living ethics README for your company.
Lauren: Exactly. Not for PR, for you. Because once the contracts hit, it gets harder to hear the small voice that says, Hey, this feature scares me.
Derek: And if you're going to give autonomy to fleets and swarms, the least you can do is keep some autonomy for your own conscience.
Lauren: Power's moving into code, and the question is who remembers they're still responsible for what that code does? So, Derek, I'm still stuck on the nine AI employees founder. The bots opening tickets for each other like they're on payroll.
Derek: Dude, I know. It was like a sitcom about engineers starring Bash scripts.
Lauren: Right. And the real point there is simple. AI agents are power tools, not co-workers. You're still on the hook for judgment and accountability.
Derek: Exactly. If that line stuck with you, share this episode with a friend who keeps saying the bots will handle it.
Lauren: If you're enjoying Tech Insider Weekly, hit follow, drop a quick review, and send us the wildest AI startup pitch you've seen this month.
Derek: Tag us, email us, send carrier pigeons, whatever works. Just don't call them AI pigeons.
Lauren: New episodes every Wednesday, so stay tuned.
Derek: Right. Thanks for hanging out with us.
Lauren: Take care, everyone.
Derek: See you next week.