The Tata Motors Demerger: 3 Surprising Twists You Didn’t See Coming

image by author and Perplexity.ai

Introduction: The Split That Sparked More Questions Than Answers

Tata Motors, a household name in the automotive industry, recently split its operations into two separate, publicly traded companies: one for passenger vehicles and another for commercial vehicles. On the surface, this is a classic corporate move, a strategic demerger designed to “unlock value,” create more focused businesses, and enhance shareholder returns.

But what happens when a move designed for clarity creates unexpected risks and confusing questions? The reality of the Tata Motors demerger is far more complex and surprising than the official announcements suggest. This article explores three crucial insights that reveal the hidden intricacies of this major corporate restructuring, offering a deeper understanding for anyone interested in business strategy or the future of Tata Motors.

——————————————————————————–

1. Splitting Up Made One Half Weaker: The Jaguar Land Rover Paradox

A demerger is a controlled, deliberate act of corporate engineering, designed to create two strong, independent entities. The expectation was that both new companies would emerge with a clear focus and a solid financial footing. In fact, rating agency S&P Global initially expected the demerger to be “neutral” to the company’s credit rating.

The reality proved to be a sharp, counter-intuitive reversal. Immediately following the split, the new passenger vehicle company, Tata Motors Passenger Vehicles (TMPV), received a ‘negative’ outlook from S&P. The core reason is a matter of strategic concentration: with the stable commercial vehicle business spun off, TMPV’s earnings are now overwhelmingly dependent on its subsidiary, Jaguar Land Rover (JLR), which accounts for over 80% of its income.

This heightened dependency was dangerously exposed when JLR was hit by a chaotic external event: a severe cyberattack that halted its global production for over a month. The financial fallout, as analyzed by S&P, directly weakens the new TMPV entity:

  • Projected revenue decline of 15%-18% for JLR in fiscal 2026.
  • Projected EBITDA margins of 3%-5% in fiscal 2026, a sharp drop from 7.6%.
  • Weakened credit metrics for TMPV, with its net debt to EBITDA ratio projected to rise to 2.5x-3.0x.
  • A dramatic fall in cash flow, with the Funds From Operations (FFO) to debt ratio projected to weaken to 15%-25%, down from over 100% in fiscal 2025.

The strategic dissonance is stark: a controlled corporate action designed to streamline operations has instead magnified the financial risk posed by a chaotic external event, making the new passenger vehicle company more vulnerable to disruptions at its luxury subsidiary.

——————————————————————————–

2. The Great Name Swap: Why ‘Tata Motors’ Isn’t What You Think It Is

The process of renaming the companies involved in the demerger appears, at first glance, to be needlessly convoluted. Consider these steps:

  1. The original, publicly traded company, Tata Motors Limited (TML), has been officially renamed Tata Motors Passenger Vehicles Limited (TMPV).
  2. The new, spun-off commercial vehicle business is named TML Commercial Vehicles Limited (TMLCV).
  3. The final twist: the plan is for TMLCV to eventually be renamed back to the original Tata Motors Limited.

This wasn’t an error, but a masterclass in corporate realpolitik. If the goal was to keep the iconic ‘Tata Motors’ name for the commercial vehicle business, why not simply spin off the passenger vehicle unit?

The answer lies in the hidden complexities of global M&A. Demerging the passenger vehicle business—which includes the UK-based Jaguar Land Rover—would have been far more complex due to international regulations. By renaming the existing listed entity and demerging the purely domestic commercial vehicle business, the company chose the path of least legal and logistical resistance. This calculated move reveals that for a global giant, the internal plumbing of a deal often matters more than the public-facing label, prioritizing operational simplicity over branding clarity.

——————————————————————————–

3. Your New Shares Arrived… With a Mystery Price Tag

For existing Tata Motors shareholders, the demerger brought a tangible but perplexing change to their portfolios. Shareholders on the record date of October 14, 2025, were deemed eligible, and the new TMLCV shares were credited to their demat accounts on October 16.

The problem was one of “phantom value.” While the new TMLCV shares appeared in shareholder accounts, they were listed under an “inactive stocks” category with no price assigned. This created a period of uncertainty where investors held an asset of unknown worth. In the absence of an official price, the market itself derived an implied value for the new commercial vehicle shares. Here’s how:

  • The pre-demerger closing price of the original Tata Motors was ₹660.75.
  • After the demerger, the now-separate passenger vehicle company (TMPV) opened for trading at ₹400.
  • The market inferred that the difference of ₹260.75 per share represented the “residual value” of the yet-to-be-listed commercial vehicle business.
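The market's back-of-the-envelope calculation can be written out explicitly. This is a trivial sketch of the subtraction described above, using the prices quoted in the article; the variable names are my own:

```python
# Implied "residual value" of the unlisted commercial vehicle business,
# derived the way the market did: pre-demerger closing price minus the
# price at which the passenger vehicle entity (TMPV) opened for trading.
pre_demerger_close = 660.75   # original Tata Motors, rupees per share
tmpv_open = 400.00            # TMPV opening trade, rupees per share

implied_tmlcv_value = pre_demerger_close - tmpv_open
print(f"Implied TMLCV value: {implied_tmlcv_value:.2f} rupees per share")  # 260.75
```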

However, this remains just a market estimate. Professional analysts’ predictions for the actual listing price vary widely, from ₹300 to as high as ₹470 per share. This range isn’t arbitrary; it reflects a strategic valuation process. For instance, some analysts arrive at a valuation of around ₹400 per share by benchmarking the business against peers like Ashok Leyland. Furthermore, the business holds a key future catalyst: the planned integration of Iveco Group NV in fiscal 2027, which will expose it to the global commercial vehicle cycle. For shareholders, the true value of their new asset remains a mystery until it officially begins trading.

——————————————————————————–

Conclusion: A Clearer Path or a More Complicated Journey?

The Tata Motors demerger serves as a powerful case study in corporate strategy. A move intended to create clarity and unlock value has instead revealed surprising complexities. The three key takeaways—the magnified financial dependency on JLR, the strategically convoluted name swap, and the temporary valuation uncertainty for shareholders—paint a picture of a restructuring that is far from simple.

Ultimately, corporate restructuring is rarely as straightforward as it appears on paper. As both new companies now navigate their independent paths, the ultimate question remains: did this complex split truly unlock long-term value, or just rearrange the pieces of an even more intricate puzzle?

Andrej Karpathy: We’re Summoning AI Ghosts, Not Building Animals — And 3 Other Surprising Truths

image by author and grok

It’s nearly impossible to escape the constant stream of AI hype. Daily announcements can make it feel like superintelligence is just around the corner. But for those in the trenches building these systems, the reality is far more complex. Andrej Karpathy, a renowned AI engineer who has led teams at both OpenAI and Tesla, approaches the field with an engineer’s “hard hat on,” offering a perspective that is deeply technical, refreshingly grounded, and often surprising.

In a recent conversation with Dwarkesh Patel, Karpathy broke down the practical realities of building AI today. This article distills four of his most counter-intuitive and impactful ideas—lessons learned from the front lines that cut through the hype and reveal the true state of artificial intelligence.

——————————————————————————–

1. We’re Summoning Ghosts, Not Building Animals

It’s common to hear AI models compared to human or animal brains, but Karpathy argues this analogy is fundamentally flawed. He proposes a different way to think about the intelligence we’re creating, one grounded in engineering reality.

Animals are products of a long, slow evolution that bakes immense capability directly into their hardware. A newborn zebra, for instance, can run and follow its mother minutes after birth—a feat of complexity that isn’t learned, but inherited. Karpathy notes that we simply don’t know how to run that optimization process.

Instead, we have what he calls a “crappy evolution”: pre-training. It’s the messy, imitation-based process we have to use because it’s the only practical version available to us. This results not in evolved creatures, but in what Karpathy calls “ghosts” or “spirits.” They are ethereal, purely digital entities whose entire nature is a compressed, “hazy recollection of the internet documents” they were trained on.

This distinction is crucial. It reframes our expectations and research, moving away from strict biomimicry and toward understanding the unique properties of an intelligence born from imitating a vast, static collection of human data.

In my post, I said we’re not building animals. We’re building ghosts or spirits or whatever people want to call it, because we’re not doing training by evolution. We’re doing training by imitation of humans and the data that they’ve put on the Internet.

——————————————————————————–

2. Today’s Reinforcement Learning Is “Terrible”

Reinforcement Learning (RL) is a key technique for improving AI models, but Karpathy offers a blunt critique of how it currently works, labeling the process “terrible,” “noisy,” and “stupid.”

The standard approach is outcome-based. A model attempts a problem (like a math equation) in hundreds of ways. It then looks at which attempts produced the correct answer and reinforces every single step taken in those successful paths.

Karpathy finds this incredibly inefficient because it incorrectly up-weights every step in a successful chain—including inefficient detours, lucky guesses, and outright mistakes—as long as the final outcome was correct. It rewards luck as much as skill.
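The credit-assignment problem Karpathy describes can be made concrete with a minimal sketch. This is not any lab's actual training code; it only illustrates the outcome-based scheme he critiques, where every step of a rollout inherits the same final reward, so a "wrong detour" inside a winning attempt is reinforced exactly as much as the good moves:

```python
# Sketch of outcome-based credit assignment: the final scalar reward is
# smeared uniformly across every step of the trajectory -- "supervision
# through a straw". Function and data names are illustrative only.
def assign_step_weights(rollouts):
    """rollouts: list of (steps, final_answer_correct) pairs."""
    weighted = []
    for steps, correct in rollouts:
        reward = 1.0 if correct else 0.0
        # Every step gets the same weight, including detours and mistakes.
        weighted.append([(step, reward) for step in steps])
    return weighted

rollouts = [
    (["try factoring", "wrong detour", "lucky guess", "final answer"], True),
    (["try substitution", "final answer"], False),
]
for trajectory in assign_step_weights(rollouts):
    print(trajectory)
```

A human reviewer would instead weight the "wrong detour" down even though the attempt succeeded; that per-step judgment is exactly what this scheme lacks.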

A human, by contrast, engages in a “complicated process of review.” We reflect on our strategy, identifying which specific parts were effective and which were flawed, not just the final result. This flaw in AI learning reveals the urgent need for better supervision methods and is a major reason models still struggle with complex, multi-step reasoning.

The way I like to put it is you’re sucking supervision through a straw. You’ve done all this work that could be a minute of rollout, and you’re sucking the bits of supervision of the final reward signal through a straw… It’s just stupid and crazy. A human would never do this.

——————————————————————————–

3. AI Is Surprisingly Bad at Writing Novel Code

Coding is often hailed as AI’s biggest success story, but Karpathy’s recent experience building nanochat—a ChatGPT clone from scratch—reveals a more nuanced reality. He identifies three types of users today: those who reject LLMs, “vibe coders” who ask an agent to write entire features, and “intermediate” users like himself, who rely on autocomplete but remain the architect. From this pragmatic sweet spot, he identified a critical weakness.

LLMs excel at writing boilerplate code and implementing patterns common on the internet. However, they struggle profoundly with code that has “never been written before” or deviates from standard conventions. When Karpathy implemented a custom gradient synchronization, the models repeatedly failed to understand his intent. They kept trying to add defensive “try-catch statements” and turn his focused project into a bloated “production code base,” producing a “total mess.”

This firsthand experience directly informs his skepticism about the “year of agents.” If today’s agents, with their many “cognitive deficits,” produce “slop” when faced with a simple custom implementation, they are nowhere near ready to autonomously innovate on AI research itself. For true novelty, human architects remain essential.

They’re not very good at code that has never been written before, maybe it’s one way to put it, which is what we’re trying to achieve when we’re building these models.

——————————————————————————–

4. For True Intelligence, Perfect Memory Is a Bug, Not a Feature

One of an LLM’s most powerful capabilities is its ability to memorize and regurgitate vast amounts of training data verbatim. In a deeply counter-intuitive turn, Karpathy argues this is not a strength but a fundamental weakness—and it’s a direct consequence of their nature as digital ghosts.

Because their entire existence is based on pattern-matching a static dataset, this powerful memory distracts the model from its more important task: learning the generalizable, abstract patterns within the data. It’s a crutch that prevents the model from being forced to develop deeper reasoning.

This stands in stark contrast to human cognition. Our famously imperfect memory is a feature, not a bug. Because we can’t remember everything perfectly, our brains are forced to compress information, find underlying patterns, and “see the forest for the trees.” This compression is the foundation of true understanding.

The implication is profound. Karpathy suggests future research must find ways to strip away rote knowledge to isolate what he calls the “cognitive core”—the pure algorithms of thought. He speculates this core could be much smaller, potentially only a billion parameters, if it weren’t so burdened by the need to memorize the entire internet.

We’re not actually that good at memorization, which is actually a feature. Because we’re not that good at memorization, we’re forced to find patterns in a more general sense. LLMs in comparison are extremely good at memorization… and it’s probably very distracting to them in a certain sense.

——————————————————————————–

Conclusion: The Long March of the Builder

Andrej Karpathy’s insights reveal a coherent picture from the engineering front lines. We are building digital “ghosts” whose nature—a hazy recollection of the internet—makes them prone to a perfect-yet-distracting memory. We then try to improve them with “terrible” learning methods that reward luck as much as skill. It’s no surprise, then, that these systems falter at true novelty.

His perspective is that of a practical builder: deeply optimistic about what AI can become, but soberly realistic about the immense challenges. Getting from a cool demo to a reliable product is a “march of nines,” where every step of improvement requires monumental effort. Fundamental discoveries about learning, reasoning, and intelligence are yet to be made.

As we continue to build these powerful new forms of intelligence, Karpathy’s insights push us to ask a crucial question: Are we merely trying to build a better tool, or are we trying to create a better thinker?

Reference Link: https://www.youtube.com/watch?v=lXUZvyajciY

He Taught 1 Million People to Code. His Rules for Building with AI Aren’t What You Think.

image by author and chatGPT5 with prompt inspiration from Reference-2

For many developers, collaborating with an AI coding agent is an exercise in hope over strategy. They give a single, vague instruction and cross their fingers—a process Ryan Carson calls “vibe coding” or “yoloing.” It’s a fun way to experiment, but as Carson notes, for “engineers that need to build real stuff,” it’s a recipe for frustration.

This isn’t a theoretical problem for Carson. As a serial founder, he’s experienced both ends of the startup spectrum. He built and sold DropSend as a solo founder, then co-founded Treehouse, a VC-backed behemoth that taught a million people to code. Now, he’s returning to his roots, building a new startup, Untangle, as a solo founder once again—but this time, supercharged by AI. His highly structured, three-file system for agentic development isn’t just a collection of clever prompts; it’s a professional methodology born from years of experience. This article shares the most impactful and counter-intuitive takeaways from his battle-tested approach.

1. Slow Down to Speed Up: The Power of Deliberate Planning

The most striking part of Carson’s process is how much time is spent in structured planning before the AI writes a single line of code. In a live demo, this setup phase took a full 20 minutes. This deliberate planning is a direct refutation of the “prompt now, fix later” impulse that dominates amateur AI usage. Instead of a single vague request, the system first generates a detailed Product Requirements Document (PRD), then breaks that down into high-level “parent tasks,” and finally generates granular, atomic “subtasks” for each.

This methodical planning acts as a critical guardrail. It forces the developer to clarify their own thinking and provides the agent with a detailed, step-by-step roadmap. By investing time upfront, you prevent the AI from veering off-course, ultimately saving hours of debugging and rework. This isn’t a hack; it’s the discipline of an architect versus the impatience of a script-kiddie. It’s what professional, agent-driven software development actually looks like.
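The three-stage pipeline described above can be sketched in a few lines. This is purely illustrative: Carson's system drives these stages with three markdown prompt files and the agent produces markdown documents, not Python objects, and the function names here are my own stand-ins:

```python
# Sketch of the PRD -> parent tasks -> subtasks planning pipeline.
# `agent` stands in for an LLM call; stage names echo, but are not,
# Carson's actual prompt files.
def plan_feature(idea, agent):
    prd = agent("create_prd", idea)                       # 1. detailed requirements doc
    parents = agent("generate_tasks", prd)                # 2. high-level parent tasks
    subtasks = {p: agent("expand", p) for p in parents}   # 3. granular, atomic subtasks
    return prd, subtasks
```

The point of the structure is that each stage's output is reviewed before it becomes the next stage's input, rather than jumping from a one-line idea straight to code.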

we’ve been talking for like 20 minutes right and like now it’s finally starting to code… this is actually the way real software development happens with agents.

2. Treat Your AI Like a New Hire, Not a Magician

Carson’s core philosophy is to treat the AI agent like a very smart, but context-free, new engineer who just showed up on your doorstep. This simple analogy is a powerful forcing function that combats a developer’s natural tendency toward laziness when prompting. As interviewer Peter Yang admitted, “I become so lazy… I just hey go build this… this is forcing me to actually provide some more details.”

Carson’s system operationalizes this principle with its first file, create_prd.md. The prompt explicitly instructs the AI agent to begin by asking clarifying questions about the project’s goals, target users, and the specific problem being solved. This step is crucial for two reasons: it forces the developer to articulate their idea with precision, and it equips the AI with the essential context needed to generate a relevant and effective plan.

imagine that you had a very smart engineer show up on your doorstep they have no context no background you wouldn’t just tell you know a random new employee “Make me a game that’s super fun to play” and then expect them to succeed.

3. Require Human Approval Before Every Major Step

A common fantasy is that AI agents will build entire applications autonomously while we sleep. Carson’s system is a practical rejection of this idea, building in explicit checkpoints that keep the human developer firmly in the driver’s seat. This “human-in-the-loop” approach is essential for guiding the agent and ensuring the project doesn’t veer off course.

The system enforces this in two key ways. First, the generate_tasks.md prompt instructs the AI to create a short list of high-level “parent tasks” and wait for user confirmation before generating detailed subtasks. Second, the process_task_list.md prompt forces the agent to ask for permission (a “yes” or “y”) before executing each individual subtask. However, this isn’t rigid dogma. As AI models improve, the system adapts. Carson notes that the need for constant supervision is already lessening with more advanced models.
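The per-subtask approval gate amounts to a simple loop. The sketch below is hypothetical—Carson's checkpoint lives inside a markdown prompt, not application code—but it shows the control flow the prompt enforces:

```python
# Illustrative sketch of the human-in-the-loop gate: the agent must
# receive an explicit "y"/"yes" before running each subtask. All names
# here are my own; this is not Carson's actual implementation.
def run_with_approval(subtasks, execute, ask=input):
    for task in subtasks:
        answer = ask(f"Run subtask '{task}'? [y/n] ").strip().lower()
        if answer not in ("y", "yes"):
            print(f"Skipped: {task}")
            continue
        execute(task)
        print(f"Done: {task}")
```

Loosening the supervision for a stronger model is then just a matter of gating at the parent-task level instead of every subtask.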

i wouldn’t want the AI to run off and create 30 tasks i would want it to create a high level you know give me five tasks and then I want to approve those.

As he later reflected on the tight control loop:

i think you know when I shipped this uh we were on sonnet 37 um and I think with sonnet 4 you really don’t need to handhold it you know quite as tightly

4. Make Your Test Suite the AI’s Real Co-Pilot

In a traditional workflow, Test-Driven Development (TDD) is a best practice. In an agentic workflow, it becomes the non-negotiable feedback mechanism that separates success from failure. Without tests, a developer is stuck in a frustrating, subjective loop of “vibe coding,” telling the agent "Hey this is not working go fix this... it's not working it's still not working."

In Carson’s demo, when he noticed the initial plan lacked testing, he instructed the agent to add a Jest test after each functional change. This highlights the developer’s crucial role in refining the AI’s strategy. Tests provide the agent with a clear, automated, and objective signal of success or failure. This loop replaces subjective frustration with objective signals, forming the foundation of any reliable, professional AI development process.
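The feedback loop tests provide can be sketched generically. Carson's demo used Jest; the sketch below wraps an arbitrary test command (the default is a placeholder, not a verified invocation) and turns its exit code into the objective signal handed back to the agent:

```python
# Sketch of the test-driven feedback loop: after each agent edit, run
# the suite and return an objective pass/fail message instead of a
# subjective "it's still not working". The default command is a
# placeholder for whatever runner the project uses.
import subprocess

def test_feedback(test_cmd=("npx", "jest", "--ci")):
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return "PASS: all tests green, proceed to next subtask"
    # Hand the failure output back to the agent as its next prompt context.
    return f"FAIL: fix before continuing\n{result.stdout}\n{result.stderr}"
```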

the reason why you have to really care about test driven development now is because it’s the loop that the agent needs to actually know if it’s doing things right.

5. Use Different Models for Different Kinds of Thinking

One of the most sophisticated techniques in Carson’s workflow is leveraging a portfolio of AI models for their unique strengths. His agent of choice, AMP, has an “Oracle” feature that demonstrates this perfectly. For most implementation tasks, the agent uses a faster, more cost-effective model like Claude 3 Sonnet. For summarization, it might use Gemini Flash. But when a high-level strategic review is needed, Carson can invoke the Oracle.

This action makes a tool call to a more powerful, slower, and more expensive reasoning model—Claude 3 Opus—not to perform an action, but to review a plan. This is a subtle but critical distinction. He isn’t asking the powerful model to code; he’s asking it to think. As Carson puts it, “what you’re doing is saying I just want someone to to double check what I’m doing.” This is analogous to asking a senior architect for a second opinion on a blueprint before letting a junior engineer start building.
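The routing idea reduces to a lookup from the kind of thinking required to the model best suited for it. The model names below follow the article's description; the dispatch table and function are imaginary, not AMP's actual API:

```python
# Hypothetical sketch of role-based model routing: a fast, cheap model
# for implementation, a quick model for summarization, and a slower
# reasoning model invoked only to review plans ("the Oracle").
ROUTES = {
    "implement": "claude-sonnet",   # fast, cost-effective edits
    "summarize": "gemini-flash",    # quick context compression
    "review":    "claude-opus",     # think and double-check, don't act
}

def pick_model(task_kind):
    # Unknown task kinds fall back to the cheap implementation model.
    return ROUTES.get(task_kind, ROUTES["implement"])

print(pick_model("review"))  # plan review goes to the reasoning model
```

The design choice worth copying is the asymmetry: the expensive model is never asked to produce code, only a second opinion on a plan, which keeps cost proportional to how much judgment a step actually needs.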

Conclusion: The Operating System for the Solo Founder

Building production-grade software with AI requires a mental shift from coder to architect. But Carson’s system reveals a deeper truth: this disciplined, architectural mindset is not just a better way to code—it’s the operating system for a new kind of entrepreneur.

Carson is building Untangle to solve a painful, real-world problem for a niche audience, a business he calls a “pain pill, not a vitamin.” This is the classic solo founder playbook, but now enabled by an unprecedented level of leverage. His structured process is what makes it possible for one person to build, ship, and manage a complex application that once would have required a team. It transforms the developer from someone who merely writes code into someone who designs a system of collaboration between human insight and machine execution. This isn’t just about building apps anymore; it’s about building a one-person engine of value.

References

  1. Full Tutorial: A Proven 3-File System to Vibe Code Production Apps | Ryan Carson
  2. https://x.com/LinusEkenstam/status/1977139213456769477