Every analyst in your competitive set reads the same sell-side notes, downloads the same industry reports, and pays for access to the same data aggregators. If your research process ends at secondary sources, you do not have an information advantage. You have a shared starting point dressed up as analysis.
When a portfolio manager at a mid-market fund asks an analyst to build conviction on a position, the analyst typically does the following: pulls the sell-side coverage, reads the investor day transcript, downloads the most recent earnings model, checks a data aggregator like Bloomberg or FactSet, and synthesises the consensus view.
The result is a document that accurately represents what the market already knows, and tells you nothing it does not. Every institutional investor with a Bloomberg terminal has access to the same sell-side notes. Every analyst who attended the same investor day heard the same prepared remarks. The data aggregator serves your competitors the same numbers it serves you.
Secondary research is, by definition, public. And public information, once incorporated into prices, generates no alpha. The question every research team needs to ask is not whether they have done enough research. It is whether the research they have done is the same research everyone else has done.
The answer, for most teams relying exclusively on secondary sources, is yes.
Analyst reports, earnings models, and industry publications are inherently backward-looking. They are produced after events have occurred, processed through a layer of interpretation, and published to a broad audience. By the time a sell-side note reaches your inbox, its thesis has already been read by every other buy-side analyst on the distribution list.
Primary research operates in the opposite direction. A conversation with a former VP of Sales at a target company, conducted before the next earnings cycle, surfaces information about pipeline dynamics, pricing pressure, and competitive positioning that has not yet been incorporated into any published analysis. The information is directional, not historical. It tells you where things are going, not where they have been.
“The institutional edge has always come from knowing something the consensus does not know yet. Secondary research, by definition, cannot give you that.”
The mechanics of secondary research create a structural problem for differentiated thinking. When twelve analysts at competing funds are reading the same Goldman Sachs sector note, attending the same company investor day, and pulling data from the same Bloomberg terminal, the only possible output is a consensus view. The inputs are identical. The outputs will converge.
This is not a failure of individual analysts. It is a failure of the research infrastructure. The analysts are good. The methodology is the problem.
Primary research breaks the consensus trap. The practitioner who worked inside the company being studied has a vantage point that no external analyst can replicate from public data. Their understanding of the sales cycle, the operational constraints, the management team dynamics, and the competitive landscape comes from direct experience, not from reading the same filings every other analyst has read.
Every public company has two versions of its story. The first is the investor relations version: carefully worded, legally reviewed, optimised to manage expectations and maintain access to capital markets. This is the version that appears in earnings transcripts, investor day presentations, and management interviews.
The second version lives in the operational layer: in the conversations that happen between senior executives and their teams, in the feedback that customers give to salespeople but never to investor relations, in the competitive intelligence that former employees carry with them when they leave. This version is never published. It does not appear in any secondary source.
Primary research through expert calls accesses the second version. A former CFO speaking candidly about the capital allocation decisions they disagreed with internally is providing information that no analyst report can contain. A former VP of Product describing the engineering constraints that delayed a key feature launch is giving a researcher access to operational reality that management communications are specifically designed not to reveal.
“IR tells you what management wants you to know. A former operator tells you what they wished they had known before taking the job.”
Consider the timeline of a sell-side research note on a company facing a major product recall. The analyst is notified of the recall when the press release is issued. They spend the next 48 hours building a revised model and drafting the note; compliance review and distribution take another day. The note arrives in your inbox roughly 72 hours after the event that prompted it.
In those 72 hours, the primary research community has already spoken to former quality control executives, supply chain managers, and customer success leaders who understand the operational severity of the recall far better than any financial analyst working from public disclosures. The analyst with access to that primary intelligence adjusted their position in the first 24 hours. The analyst who waited for the sell-side note adjusted theirs after the bulk of the move had already happened.
Primary research moves at the speed of the question. You do not wait for someone else to process the event and publish their interpretation. You go straight to the people who understand the operational reality and ask them yourself.
A comprehensive secondary research process on a single company typically involves reading the most recent annual report, the last four earnings transcripts, the available sell-side coverage, and two or three relevant industry reports. A diligent analyst can complete this process in a day or two. They will emerge with a broad understanding of the company that is, in its essential outlines, identical to that of every other diligent analyst who has done the same reading.
Depth is different from coverage. Depth is understanding why the gross margins in the enterprise division declined 120 basis points in a quarter that management described as operationally strong. It is understanding which of the three announced strategic initiatives the management team is actually committed to and which is window dressing. It is understanding what the two largest enterprise customers actually think about the renewal cycle, as opposed to what the VP of Customer Success says about it in the earnings call.
Large language models can now summarise annual reports, synthesise sell-side coverage, and extract key points from earnings transcripts faster than any human analyst. Within the next two years, the secondary research process that currently takes an analyst two days will be completed in two minutes by an AI tool that every competing fund also has access to.
The commoditisation of secondary research is not a future risk. It is the current direction of travel. The analytical tasks that AI performs well are precisely the tasks that constitute the bulk of most secondary research workflows: reading, summarising, extracting, and synthesising publicly available text.
Primary research is structurally different. It requires building relationships with practitioners, identifying the right expert for a specific question, moderating a conversation that surfaces the specific insight rather than the prepared answer, and interpreting the significance of what is said. These tasks resist automation because they depend on human judgment, human credibility, and human context.
The research teams that will have durable information advantages in an AI-saturated market are the ones building primary research infrastructure now rather than optimising for faster execution of a secondary research process that is being automated out of existence.
The argument here is not that secondary research has no value. It is that secondary research, as a standalone research methodology, cannot generate durable information advantage in a market where every institutional participant has access to the same Bloomberg terminal, the same sell-side distribution lists, and the same AI tools for accelerating synthesis.
Secondary research is table stakes. It is the minimum viable preparation for any investment or advisory process. It tells you what the market knows. Primary research tells you what the market does not know yet. The gap between those two things is where returns are made, where advisory recommendations are differentiated, and where research teams build the kind of reputation that compounds over time.
“If your research process could be replicated by any other team with a Bloomberg subscription, it is not a research edge. It is a research starting point. The question is what you do after that.”
The mechanism for building that edge is structured primary research: conversations with practitioners who hold the kind of operational knowledge that does not appear in any published source, conducted with proper compliance infrastructure, documented in a format that can be retained, cited, and returned to.
Expert call transcripts are the most efficient form of structured primary research available to institutional and independent research teams today. Here is why they work.