Full Transcript
ANDREW: Good morning, Morning Signal listeners! It is Thursday, April 10th, 2026. Andrew here, ready to dive into the markets.
AVA: And Ava, reporting for duty on the tech and geopolitics front. Andrew, what a week... it feels like the tech world is simultaneously building the future and tripping over its own feet.
ANDREW: You’re not wrong, Ava. There's a real tension brewing, especially around AI. But before we get there, let's ground ourselves a bit in some traditional market dynamics. We heard some fascinating insights over on **Planet Money** this week about the economics of book publishing.
AVA: Oh, I caught some of that! Always a good reality check on the margins of older industries.
ANDREW: Exactly. The report really highlighted how independent bookstores operate on "thin margins." Inventory management is absolutely critical there. Order too few books, and you're depriving yourself of "much needed financial lifeblood." Order too many, and you're "clogging up coveted shelf space." It's a classic inventory dilemma, but with physical constraints.
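[Editor's note: the inventory dilemma Andrew describes is the classic newsvendor problem. A minimal sketch follows; the prices, shelf cost, and demand parameters are illustrative assumptions, not figures from the episode.]

```python
from statistics import NormalDist

# Illustrative numbers only (not from the episode):
list_price = 20.00
wholesale = 0.55 * list_price      # bookstores pay ~50-60% of list price
margin = list_price - wholesale    # cost of understocking: lost profit per unmet sale
shelf_cost = 2.00                  # assumed cost of overstocking: shelf space,
                                   # handling, and return freight per unsold copy

# Newsvendor critical ratio: stock up to the demand quantile where the
# expected cost of one more unsold copy equals the expected lost margin.
critical_ratio = margin / (margin + shelf_cost)

# Assume weekly demand for a title is roughly normal (hypothetical parameters).
demand = NormalDist(mu=40, sigma=12)
order_qty = demand.inv_cdf(critical_ratio)
print(round(order_qty))
```

Because the lost margin here is much larger than the shelf cost, the model orders well above mean demand, which mirrors why "too few" is usually the costlier mistake for a bookstore.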
AVA: Right. And it's not just the bookstores. Over on **Planet Money**, they also talked about how publishers largely "assume the risk for books that do not sell" due to a "returnable" business model. That dates all the way back to the Great Depression, apparently, to keep bookstores solvent.
ANDREW: That’s wild, isn't it? A 90-year-old business model still shaping an industry. Bookstores typically pay "50% to 60% of the list price" to publishers, and those freight costs for returns? Turns out they're a pretty minor expense overall.
AVA: Hmm. So publishers are essentially subsidizing the risk for retail, which… I mean, good for the bookstores, but that must make publishers incredibly conservative.
ANDREW: You'd think, but **Planet Money** noted that publisher confidence in a book's potential is actually signaled by the "size of the first print run." A hundred thousand copies? That indicates a strong belief in sales. They even had this great line: "the better the baby's debut, the higher its lifetime potential earnings are likely to be."
AVA: That makes sense. It's like a big marketing push from the get-go.
ANDREW: And they're going everywhere with it. **Planet Money** also mentioned major publishers like Norton pursue "everything" in terms of distribution for key titles: chains, independents, "airport stores," "bookshop.org," "Amazon.com," "libraries," "gift shops," "military bases," and even "cruise ships." It's an omnichannel strategy on steroids.
AVA: Well, if you’re taking that much risk, you better be casting a wide net. I also saw that **Planet Money** highlighted the international markets as a huge focus. Orders from "Thailand, Singapore, even Malaysia," and translations slated for "Korean" and "Chinese" markets. The global reach of a successful title can be immense.
ANDREW: Absolutely. Diversifying revenue streams wherever possible. Speaking of global reach and... shall we say, less conventional revenue streams... did you happen to catch the **Lex Fridman Podcast** where they were delving into Viking history?
AVA: Oh, the Viking deep dive! I did. Very much my kind of rabbit hole.
ANDREW: What struck me from a macro perspective was how they described medieval monasteries as "some of the richest places in Europe" despite the monks' vows. They essentially served as de facto gold storage, making them prime targets for Viking raids.
AVA: That's classic economic strategy, right? Go where the wealth is concentrated and undefended. And the "Danegeld" payments, like King Ethelred’s 7.5 million silver pennies... which, as **Lex Fridman** calculated, came to "48,000 pounds of silver," or "20 tons of gold and silver" over his reign. That only encouraged more attacks.
ANDREW: Exactly. A tax on weakness. A terrible macro incentive. And a key driver for all that expansion and raiding? **Lex Fridman** cited "overpopulation" in Viking lands as a macro demographic driver. Resource scarcity at home drives outward expansion and, in this case, aggressive wealth acquisition.
AVA: It's a pattern we've seen throughout history, honestly. Overpopulation, resource scarcity, and then expansion. It certainly wasn't about building a better society at home; it was about survival and opportunity elsewhere.
ANDREW: Right. Now, bringing it back to modern markets and the central theme you mentioned earlier, AI's contradictory impact. Over on **Hard Fork**, they noted something fascinating about Anthropic. Their revenue "increased significantly" after publicly taking an ethical "safetist" stand "against the Pentagon" on certain AI applications.
AVA: That's a powerful statement in the market, isn't it? Choosing ethics over immediate, potentially lucrative, defense contracts.
ANDREW: It really is. It indicates that taking a strong ethical stance can actually be a positive market differentiator in this new, rapidly evolving space. But on the flip side, **Hard Fork** also reported that a Microsoft executive privately suggested Sam Altman might eventually be remembered as a "Bernie Madoff or Sam Bankman-Fried level scammer."
AVA: Oof. Harsh. That's illustrating some severe reputational risks for AI leadership, especially with the speed at which these companies are growing.
ANDREW: No kidding. And when you look at Altman's reach, **Hard Fork** noted that his "investment portfolio" includes "about 400 other tech companies." That's extensive influence in Silicon Valley, which just amplifies any reputational issues, good or bad.
AVA: Wow. Four hundred? That's... a lot of companies for one person to be invested in while also leading one of the most important tech companies in the world. It raises some questions about focus and potential conflicts, doesn't it?
ANDREW: It absolutely does. Now, shifting gears slightly, from one tech giant to another, the **a16z Podcast** had some amazing insights into Apple's journey. They highlighted that Apple's market share in new computers fell below "3%" in 1997.
AVA: Wait, really? That low? That's almost hard to imagine now.
ANDREW: Almost. But since then, it's risen to "30 plus percent on the global share." And the **a16z Podcast** attributed that largely to ecosystem lock-in, driven by innovative devices like the iPod, and then products like the MacBook Air, which they called a "$1,000 product" that really hit a sweet spot.
AVA: The Air was definitely a game-changer for me. And they also talked about how the iPad, often underestimated, now "sells more units than North American laptops." That's pretty wild. It found an unexpected market fit.
ANDREW: Exactly. And the **a16z Podcast** also spilled the beans on Apple's rumored new $600 "Neo" laptop. It leverages "phone chips" with "non-recurring engineering costs" already "paid for a hundred thousand times over."
AVA: That's a huge cost advantage.
ANDREW: Absolutely. It gives them a significant edge over traditional PC OEMs, who are still operating under a multi-vendor model that, as the **a16z Podcast** put it, "works against having high quality and low price."
AVA: That's a classic example of vertical integration giving you an edge over a fragmented ecosystem. Though, on the other side of that, the **a16z Podcast** did mention the "Apple Vision Pro didn't sell as much as people anticipated." So, not every gamble pays off immediately in the AR/VR market.
ANDREW: True, even Apple has its misses. But the **a16z Podcast** really dug into the core philosophies. They said the value corporations and enterprises see in Windows is its "legendary compatibility." But that compatibility also leads to vulnerabilities, fragility, conflicts, and "really bad battery life."
AVA: And Apple's strategy is the opposite. The **a16z Podcast** characterized it as "continual renewal" and deprecating older APIs yearly. That allows them to remain competitive and modern. It’s a pain for developers sometimes, but it clearly works for market share.
ANDREW: It's a strategic trade-off, isn't it? Compatibility versus constant innovation. And on that note, let's shift over to you, Ava, for all the fascinating tech developments that are less about market share and more about reshaping the world.
AVA: Thanks, Andrew. And actually, the top story today really drives home some of those core tensions. Over on **Hard Fork**, they unveiled Anthropic's new "Claude Mythos Preview" model, and it's frankly astonishing. This model is capable of autonomously finding "vulnerabilities in every major operating system and web browser."
ANDREW: Every major OS and browser? That’s... unsettlingly comprehensive.
AVA: It is. **Hard Fork** specifically mentioned it found a "27-year-old security flaw in OpenBSD" and a bug in "FFMPEG" that had been missed after "five million scans" by automated tools. It’s not just finding them, though. This model can "chain together exploits" with a "speed and efficiency that no human security research team could do."
ANDREW: That truly sounds like it could lead to a "forced reset for the entire cybersecurity industry," as **Hard Fork** put it. The implications for defense, let alone offense, are massive.
AVA: Exactly. But here’s the kicker, Andrew. Anthropic is "not releasing this model to the public because they claim it is too dangerous." Instead, **Hard Fork** reported they're providing limited access to a consortium of tech companies—excluding OpenAI and Meta—for "defensive cybersecurity testing."
ANDREW: "Too dangerous"... that phrase rings a bell. It feels like a repeat of when OpenAI initially withheld GPT-2, doesn't it?
AVA: It absolutely does. **Hard Fork** specifically called it a "significant gap" between internal AI capabilities and public access, reminiscent of GPT-2's initial withholding. It forces us to ask: what else are they developing internally that we don't even know about?
ANDREW: That’s a deep question, Ava. On a slightly lighter note, shifting from existential threats to practical applications, the **Lex Fridman Podcast** had some interesting discussions around AI productivity tools. Lex himself reported that his personal productivity "increased significantly" with "many agents running on many machines in the cloud."
AVA: That’s what we’re all striving for, right? AI as a force multiplier. And to that end, **Lex Fridman** mentioned "Lared In," a platform that helps organizations "understand how AI is being used across their businesses" to measure productivity. Then there's "Finn," which focuses on "AI agents for customer service," highlighting the challenge of human-AI collaboration in complex problem-solving.
ANDREW: That Finn example, in particular, gets at the tricky parts. It’s not just about building the tech; it's about integrating it effectively with human workflows.
AVA: Precisely. And speaking of incredible engineering, **Lex Fridman** also gave a huge shout-out to Shopify for their "incredible" engineering, particularly in optimizing "GraphQL list queries." They achieved "15x faster field level execution, less GC overhead, [and] four plus seconds off P50" latency. That’s a seriously impressive feat of optimization.
ANDREW: That is some serious engineering muscle. Now, you mentioned earlier Apple and Microsoft's contrasting strategies. The **a16z Podcast** dove deeper into this, characterizing Microsoft as a "culture of technologists solving technology problems," leading to massive scale.
AVA: Whereas Apple is an "artist" culture focused on "taste" and consistent annual product updates. It's a fundamental difference in their approach to innovation and product development. But the **a16z Podcast** also pointed out that API conflicts, specifically between "DirectX" and "Nvidia graphics APIs... CUDA," have "held Microsoft back from AI on the desktop," making this domain "Linux or Mac-centric."
ANDREW: That’s a major strategic disadvantage for Microsoft in the AI race, isn't it? Especially for those local, on-device AI workloads.
AVA: Absolutely. It shows how even legacy architectural decisions can have massive downstream effects on future capabilities. Now, let’s go even deeper into the tech and innovation rabbit hole. The **No Priors** podcast recently discussed the Large Hadron Collider at CERN and its "huge discoveries," including the "Higgs particle."
ANDREW: The LHC. A marvel of engineering.
AVA: It truly is. But to explore the formation of "dark matter" in the early universe, **No Priors** estimated an energy scale "10 million times higher" than the LHC is needed. That’s just mind-boggling scale.
ANDREW: That makes the current power of AI almost seem quaint by comparison.
AVA: Almost. And to tackle these kinds of questions, **No Priors** delved into string theory, which proposes that all fundamental particles, including "dark matter" and "dark energy," could be "tiny loops of string" playing "different harmonics." A beautifully elegant idea.
ANDREW: A beautiful, elegant, and almost impossible-to-test idea.
AVA: For now, yes. And then, there’s the Heisenberg uncertainty principle. **No Priors** explained that it implies you can never have a completely empty quantum vacuum, suggesting that "nothing is something" and vacuum energy is inherent to space. It redefines what we mean by empty space.
ANDREW: That’s wild. So, the vacuum of space isn't really a vacuum at all.
AVA: Nope, it's teeming with potential. And **No Priors** also touched on the speculative existence of "extra spatial dimensions" that could "manipulate the strength of gravity" and potentially lower the energy threshold for creating "microscopic black holes" in colliders.
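[Editor's note: Ava's vacuum-energy point is the textbook energy–time uncertainty relation; the formula below is standard physics, not a quote from the episode.]

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
```

Over a short enough interval $\Delta t$, a field's energy cannot be pinned exactly to zero, so even "empty" space carries fluctuating zero-point energy — the sense in which "nothing is something."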
ANDREW: Microscopic black holes... that sounds like a good segue into geopolitics and the ethical tightrope we're walking with powerful technologies. Take it away, Ava.
AVA: You got it, Andrew. That idea of microscopic black holes actually ties directly into the ethical tightrope, but let's start with the immediate geopolitical challenges. The development of advanced AI models like Claude Mythos Preview, which we just discussed, highlights that model development "remains essentially unregulated in this country," according to **Hard Fork**.
ANDREW: And that’s a massive problem, especially when you consider the US government's own contradictory stance.
AVA: Exactly. The US government has declared Anthropic a "supply chain risk" and ordered "all federal agencies to stop using Claude." Yet, as **Hard Fork** pointed out, Anthropic's most advanced cybersecurity model is not accessible to the government for national security purposes.
ANDREW: That’s such a profound disconnect. The government is concerned about AI security risks, and rightly so, especially with reports that "Iran is currently hacking our critical infrastructure." But they can’t access the best defensive tools because of their own regulatory declarations and Anthropic's internal safety concerns.
AVA: It creates a critical national security vulnerability. And **Hard Fork** emphasized the risk would be amplified if a "Mythos quality model fell into their hands." This whole situation is a paradox, and it underscores the urgent need for a cohesive regulatory framework that balances innovation, safety, and government access. We’ve even seen previous administration attempts at AI regulation discarded to promote "American competitiveness," illustrating this tension between safety and economic advantage.
ANDREW: It's a really difficult balance to strike, but the current situation feels... particularly exposed.
AVA: It does. Now, let's take a slight historical detour to see if past geopolitics offer any parallels. The **Lex Fridman Podcast** deep dive into the Viking Age dated its start to 793 AD with the raid on Lindisfarne. They really broke down Viking military and geopolitical strategy.
ANDREW: I was struck by their use of "terror as a main weapon." **Lex Fridman** noted they strategically attacked "high holy days like Easter [and] Christmas" for maximum impact and wealth.
AVA: Incredibly shrewd and brutal. And they engaged in "espionage," gathering intelligence as "traders" before returning as raiders. Plus, their longships provided a decisive military advantage with speeds of "70 to 120 miles a day," significantly faster than land armies.
ANDREW: That’s a truly overwhelming strategic advantage. It reminds you how much technology can shape geopolitical power.
AVA: Absolutely. And **Lex Fridman** discussed the "Great Heathen Army" in 865, which was a large, decentralized coalition demonstrating "meritocracy" in its leadership, not just hereditary rule. Their expansion often evolved rapidly from raiding to "state-building" and establishing "trade routes."
ANDREW: So it wasn't just mindless pillaging; there was a broader strategic objective at play.
AVA: Exactly. And the settlement in France that became "Normandy" saw rapid cultural assimilation. **Lex Fridman** noted that "Viking language," "names," and "worship of Odin" were replaced by "building churches" and "marrying into the local aristocracy" within a generation. It’s an incredible example of cultural integration and state-building.
ANDREW: It really is. And on the other side of that coin, the podcast pointed out that Charlemagne's vast empire, though wealthy, was "sprawling," "hadn't been thought through," and had "terrible communication," making it "wealthy and weak" and thus attractive to predators like the Vikings. A valuable lesson about the perils of overextension without robust infrastructure.
AVA: A perfect historical parallel to modern-day geopolitical vulnerabilities. Now, connecting back to our earlier discussion on frontier physics and public fear, the **No Priors** podcast mentioned that public "injunctions were taken out by people to try to stop the Large Hadron Collider from turning on" due to fears of creating "mini black holes" that could "consume the Earth."
ANDREW: That's the ultimate 'what if' scenario, isn't it? The public's fear of the unknown.
AVA: It really is. Scientists argued the risks were theoretical, as Hawking Radiation would cause them to "evaporate too quickly." But it highlights that fundamental ethical tightrope: pushing the boundaries of science for discovery, while also navigating legitimate public anxieties about potentially catastrophic outcomes. Sound familiar to the AI safety debates we’re having today?
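[Editor's note: the "evaporate too quickly" argument follows from Hawking's semiclassical evaporation-time estimate; this is a standard heuristic formula, not from the episode.]

```latex
t_{\text{evap}} \;\approx\; \frac{5120\,\pi\, G^{2} M^{3}}{\hbar\, c^{4}}
```

For a hypothetical collider-produced black hole of roughly TeV-scale mass ($M \sim 10^{-24}\,\text{kg}$), this gives a lifetime formally of order $10^{-88}$ seconds — effectively instantaneous, with no time to accrete matter (though the formula is only heuristic at such scales).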
ANDREW: It absolutely does. The pattern of public fear, scientific reassurance, and the challenge of balancing innovation with perceived existential risks is clearly not new. It’s just now being played out with lines of code instead of particle beams.
AVA: Exactly. And that's really the cross-cutting theme of the day, isn't it? The profound and often contradictory impact of advanced AI on national security and corporate ethics. It's truly creating significant market and geopolitical tensions.
ANDREW: It sure is. So, let’s quickly pivot to "Things to Watch" for the rest of the week, Ava. What's on your radar?
AVA: Well, I'm definitely keeping an eye on the **Cybersecurity Industry "Reset"**. The implications of AI models capable of autonomously finding and chaining exploits are so massive that it could lead to widespread patching and re-engineering efforts. So, watching for how that begins to unfold over the next 6-12 months, as **Hard Fork** suggested, is key. And also, how that tension around **AI Leadership Trust & Governance** plays out, especially after those allegations from **Hard Fork** regarding Sam Altman. Any further developments there could significantly impact investor confidence. What about you, Andrew?
ANDREW: For me, it's all about Apple and Anthropic. First, the **Ongoing Anthropic AI Access Debate**. Given its "supply chain risk" designation, and Anthropic's "too dangerous" stance on Mythos, I'm watching for any policy shifts regarding the US government's access to advanced AI models. That's a huge national security and market issue, as **Hard Fork** highlighted. And then, definitely monitoring **Apple's "Neo" Laptop Market Impact**. The **a16z Podcast** outlined how it leverages phone chips for a cost advantage, and that could further disrupt the traditional PC OEM market mid-2026. If it lands as successfully as Apple hopes, it could reshape that segment dramatically.
AVA: Fascinating stuff all around. Thanks for joining us for another Morning Signal, everyone.
ANDREW: Always a pleasure, Ava. We’ll be back tomorrow with more.
AVA: Until then, stay curious.