Category Archives: 1-By Laksh

All these articles are from Laksh’s desk

The Tata Motors Demerger: 3 Surprising Twists You Didn’t See Coming

image by author and Perplexity.ai

Introduction: The Split That Sparked More Questions Than Answers

Tata Motors, a household name in the automotive industry, recently split its operations into two separate, publicly traded companies: one for passenger vehicles and another for commercial vehicles. On the surface, this is a classic corporate move, a strategic demerger designed to “unlock value,” create more focused businesses, and enhance shareholder returns.

But what happens when a move designed for clarity creates unexpected risks and confusing questions? The reality of the Tata Motors demerger is far more complex and surprising than the official announcements suggest. This article explores three crucial insights that reveal the hidden intricacies of this major corporate restructuring, offering a deeper understanding for anyone interested in business strategy or the future of Tata Motors.

——————————————————————————–

1. Splitting Up Made One Half Weaker: The Jaguar Land Rover Paradox

A demerger is a controlled, deliberate act of corporate engineering, designed to create two strong, independent entities. The expectation was that both new companies would emerge with a clear focus and a solid financial footing. In fact, rating agency S&P Global initially expected the demerger to be “neutral” to the company’s credit rating.

The reality proved to be a sharp, counter-intuitive reversal. Immediately following the split, the new passenger vehicle company, Tata Motors Passenger Vehicles (TMPV), received a ‘negative’ outlook from S&P. The core reason is a matter of strategic concentration: with the stable commercial vehicle business spun off, TMPV’s earnings are now overwhelmingly dependent on its subsidiary, Jaguar Land Rover (JLR), which accounts for over 80% of its income.

This heightened dependency was dangerously exposed when JLR was hit by a chaotic external event: a severe cyberattack that halted its global production for over a month. The financial fallout, as analyzed by S&P, directly weakens the new TMPV entity:

  • Projected revenue decline of 15%-18% for JLR in fiscal 2026.
  • EBITDA margins projected to fall to 3%-5% in fiscal 2026, a sharp drop from 7.6%.
  • Weakened credit metrics for TMPV, with its net debt to EBITDA ratio projected to rise to 2.5x-3.0x.
  • A dramatic fall in cash flow, with the Funds From Operations (FFO) to debt ratio projected to weaken to 15%-25%, down from over 100% in fiscal 2025.

The strategic dissonance is stark: a controlled corporate action designed to streamline operations has instead magnified the financial risk posed by a chaotic external event, making the new passenger vehicle company more vulnerable to disruptions at its luxury subsidiary.

——————————————————————————–

2. The Great Name Swap: Why ‘Tata Motors’ Isn’t What You Think It Is

The process of renaming the companies involved in the demerger appears, at first glance, to be needlessly convoluted. Consider these steps:

  1. The original, publicly traded company, Tata Motors Limited (TML), has been officially renamed Tata Motors Passenger Vehicles Limited (TMPV).
  2. The new, spun-off commercial vehicle business is named TML Commercial Vehicles Limited (TMLCV).
  3. The final twist: the plan is for TMLCV to eventually be renamed back to the original Tata Motors Limited.

This wasn’t an error, but a masterclass in corporate realpolitik. If the goal was to keep the iconic ‘Tata Motors’ name for the commercial vehicle business, why not simply spin off the passenger vehicle unit?

The answer lies in the hidden complexities of global M&A. Demerging the passenger vehicle business—which includes the UK-based Jaguar Land Rover—would have been far more complex due to international regulations. By renaming the existing listed entity and demerging the purely domestic commercial vehicle business, the company chose the path of least legal and logistical resistance. This calculated move reveals that for a global giant, the internal plumbing of a deal often matters more than the public-facing label, prioritizing operational simplicity over branding clarity.

——————————————————————————–

3. Your New Shares Arrived… With a Mystery Price Tag

For existing Tata Motors shareholders, the demerger brought a tangible but perplexing change to their portfolios. Shareholders on the record date of October 14, 2025, were deemed eligible, and the new TMLCV shares were credited to their demat accounts on October 16.

The problem was one of “phantom value.” While the new TMLCV shares appeared in shareholder accounts, they were listed under an “inactive stocks” category with no price assigned. This created a period of uncertainty where investors held an asset of unknown worth. In the absence of an official price, the market itself derived an implied value for the new commercial vehicle shares. Here’s how:

  • The pre-demerger closing price of the original Tata Motors was ₹660.75.
  • After the demerger, the now-separate passenger vehicle company (TMPV) opened for trading at ₹400.
  • The market inferred that the difference of ₹260.75 per share represented the “residual value” of the yet-to-be-listed commercial vehicle business; the sketch below walks through the arithmetic.
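
That back-of-the-envelope arithmetic fits in a few lines of Python. This is a minimal sketch using only the figures above; it assumes the one-for-one share entitlement implied by subtracting per-share prices.

    # Market-implied value of the unlisted commercial vehicle shares,
    # derived from the two traded prices reported above (assumes a
    # one-for-one share entitlement).
    pre_demerger_close = 660.75  # last close of the combined Tata Motors (₹)
    tmpv_open = 400.00           # opening trade of the passenger vehicle entity (₹)

    implied_tmlcv_value = pre_demerger_close - tmpv_open
    print(f"Implied TMLCV value: ₹{implied_tmlcv_value:.2f} per share")  # ₹260.75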

However, this remains just a market estimate. Professional analysts’ predictions for the actual listing price vary widely, from ₹300 to as high as ₹470 per share. The range isn’t arbitrary; it reflects differing valuation approaches. For instance, some analysts arrive at roughly ₹400 per share by benchmarking the business against peers like Ashok Leyland. The business also holds a key future catalyst: the planned integration of Iveco Group NV in fiscal 2027, which will expose it to the global commercial vehicle cycle. For shareholders, the true value of their new asset remains a mystery until it officially begins trading.

——————————————————————————–

Conclusion: A Clearer Path or a More Complicated Journey?

The Tata Motors demerger serves as a powerful case study in corporate strategy. A move intended to create clarity and unlock value has instead revealed surprising complexities. The three key takeaways—the magnified financial dependency on JLR, the strategically convoluted name swap, and the temporary valuation uncertainty for shareholders—paint a picture of a restructuring that is far from simple.

Ultimately, corporate restructuring is rarely as straightforward as it appears on paper. As both new companies now navigate their independent paths, the ultimate question remains: did this complex split truly unlock long-term value, or just rearrange the pieces of an even more intricate puzzle?

Andrej Karpathy: We’re Summoning AI Ghosts, Not Building Animals — And 3 Other Surprising Truths

image by author and grok

It’s nearly impossible to escape the constant stream of AI hype. Daily announcements can make it feel like superintelligence is just around the corner. But for those in the trenches building these systems, the reality is far more complex. Andrej Karpathy, a renowned AI engineer who has led teams at both OpenAI and Tesla, approaches the field with an engineer’s “hard hat on,” offering a perspective that is deeply technical, refreshingly grounded, and often surprising.

In a recent conversation with Dwarkesh Patel, Karpathy broke down the practical realities of building AI today. This article distills four of his most counter-intuitive and impactful ideas—lessons learned from the front lines that cut through the hype and reveal the true state of artificial intelligence.

——————————————————————————–

1. We’re Summoning Ghosts, Not Building Animals

It’s common to hear AI models compared to human or animal brains, but Karpathy argues this analogy is fundamentally flawed. He proposes a different way to think about the intelligence we’re creating, one grounded in engineering reality.

Animals are products of a long, slow evolution that bakes immense capability directly into their hardware. A newborn zebra, for instance, can run and follow its mother minutes after birth—a feat of complexity that isn’t learned, but inherited. Karpathy notes that we simply don’t know how to run that optimization process.

Instead, we have what he calls a “crappy evolution”: pre-training. It’s the messy, imitation-based process we have to use because it’s the only practical version available to us. This results not in evolved creatures, but in what Karpathy calls “ghosts” or “spirits.” They are ethereal, purely digital entities whose entire nature is a compressed, “hazy recollection of the internet documents” they were trained on.

This distinction is crucial. It reframes our expectations and research, moving away from strict biomimicry and toward understanding the unique properties of an intelligence born from imitating a vast, static collection of human data.

“In my post, I said we’re not building animals. We’re building ghosts or spirits or whatever people want to call it, because we’re not doing training by evolution. We’re doing training by imitation of humans and the data that they’ve put on the Internet.”

——————————————————————————–

2. Today’s Reinforcement Learning Is “Terrible”

Reinforcement Learning (RL) is a key technique for improving AI models, but Karpathy offers a blunt critique of how it currently works, labeling the process “terrible,” “noisy,” and “stupid.”

The standard approach is outcome-based. A model attempts a problem (like a math equation) in hundreds of ways. It then looks at which attempts produced the correct answer and reinforces every single step taken in those successful paths.

Karpathy finds this incredibly inefficient because it incorrectly up-weights every step in a successful chain—including inefficient detours, lucky guesses, and outright mistakes—as long as the final outcome was correct. It rewards luck as much as skill.
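
To see the mechanism concretely, here is a toy sketch of outcome-based reward assignment, written in Python purely for illustration; the rollouts, their steps, and the binary reward are hypothetical, not any lab’s actual pipeline.

    # Toy model of outcome-based RL: every step of a rollout receives the
    # same weight, determined solely by whether the final answer was correct.
    # Detours and lucky guesses inside a winning rollout are reinforced just
    # as strongly as the genuinely good steps.
    def outcome_based_weights(rollouts):
        """rollouts: list of (steps, final_answer_correct) pairs."""
        weighted = []
        for steps, correct in rollouts:
            reward = 1.0 if correct else 0.0  # one bit, smeared over every step
            weighted.extend((step, reward) for step in steps)
        return weighted

    # Hypothetical rollouts for a single math problem:
    rollouts = [
        (["factor the equation", "pointless detour", "lucky guess", "answer: 42"], True),
        (["substitute carefully", "sound algebra", "answer: 41"], False),
    ]
    for step, weight in outcome_based_weights(rollouts):
        print(f"{weight:.0f}  {step}")

Every step of the winning rollout, including the detour and the lucky guess, prints with weight 1; the careful-but-wrong rollout gets 0 across the board.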

A human, by contrast, engages in a “complicated process of review.” We reflect on our strategy, identifying which specific parts were effective and which were flawed, not just the final result. This flaw in AI learning reveals the urgent need for better supervision methods and is a major reason models still struggle with complex, multi-step reasoning.

“The way I like to put it is you’re sucking supervision through a straw. You’ve done all this work that could be a minute of rollout, and you’re sucking the bits of supervision of the final reward signal through a straw… It’s just stupid and crazy. A human would never do this.”

——————————————————————————–

3. AI Is Surprisingly Bad at Writing Novel Code

Coding is often hailed as AI’s biggest success story, but Karpathy’s recent experience building nanochat—a ChatGPT clone from scratch—reveals a more nuanced reality. He identifies three types of users today: those who reject LLMs, “vibe coders” who ask an agent to write entire features, and “intermediate” users like himself, who rely on autocomplete but remain the architect. From this pragmatic sweet spot, he identified a critical weakness.

LLMs excel at writing boilerplate code and implementing patterns common on the internet. However, they struggle profoundly with code that has “never been written before” or deviates from standard conventions. When Karpathy implemented a custom gradient synchronization, the models repeatedly failed to understand his intent. They kept trying to add defensive “try-catch statements” and turn his focused project into a bloated “production code base,” producing a “total mess.”
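
For a sense of what such unconventional code looks like, here is a generic hand-rolled gradient synchronization in PyTorch. This is a hypothetical sketch of the pattern, not Karpathy’s actual nanochat implementation.

    import torch.distributed as dist

    def sync_gradients(model, world_size):
        # Manually average gradients across ranks instead of relying on the
        # stock DistributedDataParallel wrapper. Deliberately lean: no
        # try/except, no defensive fallbacks -- exactly the minimal style
        # the assistants reportedly kept trying to "fix".
        for p in model.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad.div_(world_size)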

This firsthand experience directly informs his skepticism about the “year of agents.” If today’s agents, with their many “cognitive deficits,” produce “slop” when faced with a simple custom implementation, they are nowhere near ready to autonomously innovate on AI research itself. For true novelty, human architects remain essential.

“They’re not very good at code that has never been written before, maybe it’s one way to put it, which is what we’re trying to achieve when we’re building these models.”

——————————————————————————–

4. For True Intelligence, Perfect Memory Is a Bug, Not a Feature

One of an LLM’s most powerful capabilities is its ability to memorize and regurgitate vast amounts of training data verbatim. In a deeply counter-intuitive turn, Karpathy argues this is not a strength but a fundamental weakness—and it’s a direct consequence of their nature as digital ghosts.

Because their entire existence is based on pattern-matching a static dataset, this powerful memory distracts the model from its more important task: learning the generalizable, abstract patterns within the data. It’s a crutch that prevents the model from being forced to develop deeper reasoning.

This stands in stark contrast to human cognition. Our famously imperfect memory is a feature, not a bug. Because we can’t remember everything perfectly, our brains are forced to compress information, find underlying patterns, and “see the forest for the trees.” This compression is the foundation of true understanding.

The implication is profound. Karpathy suggests future research must find ways to strip away rote knowledge to isolate what he calls the “cognitive core”—the pure algorithms of thought. He speculates this core could be much smaller, potentially only a billion parameters, if it weren’t so burdened by the need to memorize the entire internet.

“We’re not actually that good at memorization, which is actually a feature. Because we’re not that good at memorization, we’re forced to find patterns in a more general sense. LLMs in comparison are extremely good at memorization… and it’s probably very distracting to them in a certain sense.”

——————————————————————————–

Conclusion: The Long March of the Builder

Andrej Karpathy’s insights reveal a coherent picture from the engineering front lines. We are building digital “ghosts” whose nature—a hazy recollection of the internet—makes them prone to a perfect-yet-distracting memory. We then try to improve them with “terrible” learning methods that reward luck as much as skill. It’s no surprise, then, that these systems falter at true novelty.

His perspective is that of a practical builder: deeply optimistic about what AI can become, but soberly realistic about the immense challenges. Getting from a cool demo to a reliable product is a “march of nines,” where every step of improvement requires monumental effort. Fundamental discoveries about learning, reasoning, and intelligence are yet to be made.

As we continue to build these powerful new forms of intelligence, Karpathy’s insights push us to ask a crucial question: Are we merely trying to build a better tool, or are we trying to create a better thinker?

Reference Link: https://www.youtube.com/watch?v=lXUZvyajciY

Originality Across Time: From Kalidasa to the Age of Large Language Models

image by author and Nano Banana via Google AI Studio

Originality—what does it mean to create something truly new? This question has echoed through the corridors of human thought for millennia, evolving in meaning with each cultural epoch. From the lyrical genius of ancient Indian poet Kalidasa to the algorithmic artistry of today’s Large Language Models (LLMs), our conception of originality has undergone profound transformation. In an age where AI chatbots co-author poems, draft essays, and even compose music, we find ourselves at a crossroads: if a machine helped create it, can it still be considered original?

The Classical Ideal: Originality as Divine Inspiration

In the 4th–5th century CE, Kalidasa, often hailed as the greatest poet and playwright in classical Sanskrit literature, composed masterpieces such as Abhijnanasakuntalam, Meghaduta, and Kumarasambhava. To his contemporaries, Kalidasa’s brilliance was not merely technical—it was seen as pratibha, a Sanskrit term denoting intuitive genius or creative insight. This concept did not emphasize novelty in the modern sense, but rather the poet’s ability to draw from tradition and yet express it with such depth and grace that it felt new.

Originality in Kalidasa’s time was not about inventing ex nihilo (from nothing), but about reimagining and refining the eternal. His works were deeply rooted in existing mythologies and poetic conventions, yet his voice was unmistakably unique. His originality lay not in breaking from tradition, but in transcending it through emotional depth, linguistic beauty, and imaginative power.

Here, originality was a synthesis: the poet as a vessel through which divine or cultural truths were re-expressed in a personal, inspired way. The idea of “plagiarism” as we know it today did not exist; instead, excellence was measured by how well one could internalize and re-voice the wisdom of the past.

The Enlightenment Shift: Originality as Individual Genius

Fast forward to the 18th and 19th centuries, and the Romantic movement redefined originality. Thinkers like Wordsworth, Coleridge, and later Emerson elevated the individual artist as a solitary genius creating from inner vision. Originality now meant breaking from tradition, expressing the unique self, and producing something unprecedented.

This era birthed the myth of the “solitary creator”—the poet scribbling by candlelight, the painter tormented by inspiration. Originality became synonymous with novelty, authenticity, and ownership. The copyright laws that emerged in this period reflect this shift: creativity was now property, and originality was its legal and moral foundation.

But even then, originality was never pure invention. T.S. Eliot, in his seminal essay “Tradition and the Individual Talent” (1919), argued that true originality comes not from ignoring the past, but from engaging deeply with it. The poet, he said, must be aware of “the whole of the literature of Europe,” and originality arises from the dynamic tension between the old and the new.

The LLM Age: Originality in the Era of Artificial Co-Creation

Today, we stand at the threshold of a new paradigm—one where creativity is no longer solely the domain of human minds. With the advent of Large Language Models like GPT, Claude, and Llama, machines can generate poetry, stories, code, and philosophical essays that are indistinguishable from human work—at least on the surface.

This raises urgent questions:

  • If an AI helps me write a poem, is it mine?
  • If the AI trained on millions of texts, including Kalidasa’s, is its output derivative?
  • Can a machine be original?

The answer lies not in binary thinking, but in redefining what originality means in a collaborative, data-saturated world.

First, it’s important to recognize that LLMs do not “create” in the human sense. They do not have consciousness, intention, or emotion. Instead, they statistically recombine patterns from their training data. Every sentence an AI generates is a mosaic of human expressions, reassembled through mathematical inference.
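
A toy sketch makes those mechanics concrete. This is illustrative Python, not any production system: given the scores a model assigns to candidate next tokens, generation is repeated random draws from a probability distribution.

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Turn raw scores into probabilities (a numerically stable softmax),
        # then draw one token at random. There is no intention here -- only
        # statistical inference over patterns absorbed from training data.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return np.random.default_rng().choice(len(probs), p=probs)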

But this does not mean the output lacks originality. Consider a poet using an AI as a collaborator: they might prompt the model with a line from Meghaduta, ask for a modern reinterpretation, and then refine the AI’s response into a new poem. The final work is not the AI’s alone, nor is it purely the human’s. It is a hybrid creation—a dialogue across time and intelligence.

In this light, originality is no longer about purity of source, but about the intentionality of synthesis. Just as Kalidasa drew from the Mahabharata to create Shakuntala, today’s creators draw from a vast digital corpus, mediated by AI, to produce something new. The act of curation, editing, and personal expression becomes the hallmark of originality.

Rethinking Authorship: From Solitary Genius to Creative Partnership

We must move beyond the outdated dichotomy of “human original” versus “machine derivative.” The LLM age calls for a more nuanced understanding—one where originality is seen as a process, not a product.

Originality today may reside in:

  • The prompt—the creative spark that initiates the AI’s response.
  • The selection and refinement—the human judgment that shapes raw output into meaningful work.
  • The context—the cultural, emotional, or intellectual framework that gives the work significance.

In this view, AI does not replace the artist; it becomes a new kind of muse—one that amplifies human creativity rather than diminishing it.

Conclusion: Originality Reborn

From Kalidasa’s inspired re-tellings to the AI-assisted art of the 21st century, originality has never been about creating from nothing. It has always been about transformation—about taking the known and making it feel new, personal, and true.

In the age of LLMs, we are not losing originality. We are expanding it. The tools have changed, but the human desire to express, to connect, and to transcend remains the same.

So, if an AI helped create it—does that make it unoriginal? Not necessarily. What matters is not the tool, but the vision behind it. Originality, in the end, is not about where the words come from, but what they mean—and who gives them meaning.

As Kalidasa might say, if the lotus blooms from the mud, does its beauty depend on the soil—or the sun?