This section analyzes PEAR's effectiveness by measuring consensus across six recognized explainer agreement metrics, including pairwise rank agreement, rank correlation, and feature agreement. PEAR training not only increases agreement between the explainers used in the loss (Grad and IntGrad), but also generalizes to explainers not seen during training, such as LIME and SHAP.

The Trade-Off Between Accuracy and Agreement in AI Models

Abstract and 1. Introduction

1.1 Post Hoc Explanation

1.2 The Disagreement Problem

1.3 Encouraging Explanation Consensus

  2. Related Work

  3. PEAR: Post Hoc Explainer Agreement Regularizer

  4. The Efficacy of Consensus Training

    4.1 Agreement Metrics

    4.2 Improving Consensus Metrics

    4.3 Consistency At What Cost?

    4.4 Are the Explanations Still Valuable?

    4.5 Consensus and Linearity

    4.6 Two Loss Terms

  5. Discussion

    5.1 Future Work

    5.2 Conclusion, Acknowledgements, and References

Appendix

4.1 Agreement Metrics

In their work on the disagreement problem, Krishna et al. [15] introduce six metrics to measure the amount of agreement between post hoc feature attributions. Let [šø1(š‘„)]š‘– , [šø2(š‘„)]š‘– be the attribution scores from explainers for the š‘–-th feature of an input š‘„. A feature’s rank is its index when features are ordered by the absolute value of their attribution scores. A feature is considered in the top-š‘˜ most important features if its rank is in the top-š‘˜. For example, if the importance scores for a point š‘„ = [š‘„1, š‘„2, š‘„3, š‘„4], output by one explainer are šø1(š‘„) = [0.1, āˆ’0.9, 0.3, āˆ’0.2], then the most important feature is š‘„2 and its rank is 1 (for this explainer).
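To make the ranking convention concrete, here is a minimal Python sketch (ours, not from the paper) that orders features by the absolute value of their attributions and reports each feature's rank and the top-š‘˜ set, reproducing the worked example above.

```python
import numpy as np

def feature_ranks(attributions):
    # Rank 1 = most important, where importance is the absolute attribution value.
    order = np.argsort(-np.abs(attributions))           # feature indices, most important first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(attributions) + 1)  # rank assigned to each original feature
    return ranks

E1 = np.array([0.1, -0.9, 0.3, -0.2])   # the worked example from the text
print(feature_ranks(E1))                # [4 1 2 3] -> x2 has rank 1
print(np.argsort(-np.abs(E1))[:2])      # indices of the top-2 features: [1 2] (x2 and x3)
```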

Feature Agreement counts the number of features š‘„š‘– such that [šø1(š‘„)]š‘– and [šø2(š‘„)]š‘– are both in the top-š‘˜. Rank Agreement counts the number of features in the top-š‘˜ with the same rank in šø1(š‘„) and šø2(š‘„). Sign Agreement counts the number of features in the top-š‘˜ such that [šø1(š‘„)]š‘– and [šø2(š‘„)]š‘– have the same sign. Signed Rank Agreement counts the number of features in the top-š‘˜ such that [šø1(š‘„)]š‘– and [šø2(š‘„)]š‘– agree on both sign and rank. Rank Correlation is the correlation between šø1(š‘„) and šø2(š‘„) (on all features, not just in the top-š‘˜), and is often referred to as the Spearman correlation coefficient. Lastly, Pairwise Rank Agreement counts the number of pairs of features (š‘„š‘–, š‘„š‘—) such that šø1 and šø2 agree on whether š‘„š‘– or š‘„š‘— is more important. All of these metrics are formalized as fractions and thus range from 0 to 1, except Rank Correlation, which is a correlation measurement and ranges from āˆ’1 to +1. Their formal definitions are provided in Appendix A.3.
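Under these definitions, the sketch below implements a few of the metrics for a single input. The function names and the choice to require membership in both explainers' top-š‘˜ sets are our reading of the descriptions above, not a reference implementation, and rank correlation is delegated to `scipy.stats.spearmanr` applied to the raw attribution vectors.

```python
import numpy as np
from scipy.stats import spearmanr

def ranks(e):
    # Rank 1 = most important, by absolute attribution value.
    order = np.argsort(-np.abs(e))
    r = np.empty_like(order)
    r[order] = np.arange(1, len(e) + 1)
    return r

def top_k(e, k):
    return set(np.argsort(-np.abs(e))[:k])

def feature_agreement(e1, e2, k):
    # Fraction of the top-k features shared by both explainers.
    return len(top_k(e1, k) & top_k(e2, k)) / k

def rank_agreement(e1, e2, k):
    # Fraction of shared top-k features that also have the same rank.
    r1, r2 = ranks(e1), ranks(e2)
    return sum(r1[i] == r2[i] for i in top_k(e1, k) & top_k(e2, k)) / k

def sign_agreement(e1, e2, k):
    # Fraction of shared top-k features whose attribution signs match.
    return sum(np.sign(e1[i]) == np.sign(e2[i]) for i in top_k(e1, k) & top_k(e2, k)) / k

def pairwise_rank_agreement(e1, e2):
    # Fraction of feature pairs on whose relative importance the explainers agree.
    r1, r2 = ranks(e1), ranks(e2)
    pairs = [(i, j) for i in range(len(e1)) for j in range(i + 1, len(e1))]
    return sum((r1[i] < r1[j]) == (r2[i] < r2[j]) for i, j in pairs) / len(pairs)

e1 = np.array([0.1, -0.9, 0.3, -0.2])
e2 = np.array([0.2, -0.7, -0.1, 0.4])
print(feature_agreement(e1, e2, k=2), pairwise_rank_agreement(e1, e2))
print(spearmanr(e1, e2).correlation)  # rank correlation over all features
```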

In the results that follow, we use all of the metrics defined above and note which one is used where appropriate. To measure the agreement between a pair of explainers with a given metric, we average that metric over the test data. Both agreement and accuracy measurements are averaged over several trials (see Appendices A.6 and A.5 for error bars).
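For concreteness, a small sketch of this aggregation with synthetic placeholder scores (not the paper's measurements): average a per-example agreement score over the test set within each trial, then report the mean and standard error across trials.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-example agreement scores for one explainer pair:
# 5 independent trials, 200 test points each (placeholder values only).
per_example = rng.uniform(0.5, 1.0, size=(5, 200))

per_trial = per_example.mean(axis=1)                      # average over the test data
mean = per_trial.mean()                                   # average over trials
stderr = per_trial.std(ddof=1) / np.sqrt(len(per_trial))  # error bar across trials
print(f"agreement = {mean:.3f} ± {stderr:.3f}")
```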

4.2 Improving Consensus Metrics

The intention of our consensus loss term is to improve agreement metrics. While the objective function explicitly includes only two explainers, we show generalization to unseen explainers as well as to the unseen test data. For example, we train for agreement between Grad and IntGrad and observe an increase in consensus between LIME and SHAP.

To evaluate the improvement in agreement metrics when using our consensus loss term, we compute explanations from each explainer on models trained naturally and on models trained with our consensus loss using šœ† = 0.5.
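To make the training setup concrete, here is a minimal PyTorch sketch, assuming the overall objective mixes the usual task loss with a disagreement penalty weighted by šœ†. The disagreement term below is a simple cosine-distance stand-in between plain input gradients and a coarse integrated-gradients approximation; it is not the exact PEAR formulation from Section 3, and the function names are ours.

```python
import torch
import torch.nn.functional as F

def saliency(model, x):
    # Plain input-gradient explanation of the top logit (keep the graph so the
    # disagreement term can be backpropagated into the model parameters).
    x = x.clone().requires_grad_(True)
    score = model(x).max(dim=1).values.sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def integrated_gradients(model, x, steps=8):
    # Coarse integrated-gradients approximation with a zero baseline.
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        total = total + saliency(model, baseline + alpha * (x - baseline))
    return (x - baseline) * total / steps

def consensus_loss(model, x, y, lam=0.5):
    # (1 - lam) * task loss + lam * disagreement between the two explainers.
    task = F.cross_entropy(model(x), y)
    e1 = saliency(model, x).flatten(1)
    e2 = integrated_gradients(model, x).flatten(1)
    disagreement = (1 - F.cosine_similarity(e1, e2, dim=1)).mean()
    return (1 - lam) * task + lam * disagreement

# Usage inside a standard training loop (model maps flat features to class logits):
#   loss = consensus_loss(model, x_batch, y_batch, lam=0.5)
#   loss.backward(); optimizer.step()
```

Setting šœ† = 0 recovers natural training, and larger šœ† puts more weight on explainer agreement.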

In Figure 4, using a visualization tool developed by Krishna et al. [15], we show how we evaluate the change in an agreement metric (pairwise rank agreement) between all pairs of explainers on the California Housing data.

Hypothesis: We can increase consensus by deliberately training for post hoc explainer agreement.

Through our experiments, we observe improved agreement metrics on unseen data and on unseen pairs of explainers. In Figure 4 we show a representative example where Pairwise Rank Agreement between Grad and IntGrad improves from 87% to 96% on unseen data. Moreover, looking at two other explainers, agreement between SmoothGrad and LIME improves from 56% to 79%. This shows generalization both to unseen data and to explainers other than those explicitly used in the loss term. In Appendix A.5, we see more saturated disagreement matrices across all of our datasets and all six agreement metrics.

4.3 Consistency At What Cost?

While training for consensus works to boost agreement, a question remains: How accurate are these models?

To address this question, we first point out that there is a trade-off here, i.e., more consensus comes at the cost of accuracy. With this in mind, we posit that there is a Pareto frontier on the accuracy-agreement axes. While we cannot assert that our models are on the Pareto frontier, we plot trade-off curves which represent the trajectory through accuracy-agreement space that is carved out by changing šœ†.
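As a sketch of how such a trade-off curve could be traced, the toy sweep below reuses the `consensus_loss`, `saliency`, `integrated_gradients`, and `pairwise_rank_agreement` sketches from earlier in this section. It trains a small model on synthetic data for several šœ† values and records accuracy alongside Grad-vs-IntGrad agreement; the data, model, and šœ† grid are arbitrary illustration choices, and agreement is measured between the explainers in the loss rather than LIME and SHAP as in Figure 5.

```python
import numpy as np
import torch
import torch.nn as nn

# Toy sweep over the consensus weight, reusing consensus_loss, saliency,
# integrated_gradients, and pairwise_rank_agreement from the sketches above.
torch.manual_seed(0)
X = torch.randn(512, 8)
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).long()          # synthetic nonlinear labels
X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]

curve = []
for lam in [0.0, 0.25, 0.5, 0.75]:
    model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                               # short full-batch training
        opt.zero_grad()
        consensus_loss(model, X_tr, y_tr, lam=lam).backward()
        opt.step()

    acc = (model(X_te).argmax(dim=1) == y_te).float().mean().item()
    e1 = saliency(model, X_te).detach().numpy()
    e2 = integrated_gradients(model, X_te).detach().numpy()
    agree = np.mean([pairwise_rank_agreement(a, b) for a, b in zip(e1, e2)])
    curve.append((lam, agree, acc))

for lam, agree, acc in curve:                          # one point per lambda on the curve
    print(f"lambda={lam:.2f}  agreement={agree:.2f}  accuracy={acc:.2f}")
```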

Hypothesis: We can increase consensus with an acceptable drop in accuracy.

While this hypothesis is phrased as a subjective claim, in practice we define acceptable performance as better than a linear model, as explained at the beginning of Section 4. We see across all three datasets that increasing the consensus loss weight šœ† leads to higher pairwise rank agreement between LIME and SHAP. Moreover, even with high values of šœ†, the accuracy stays well above that of linear models, indicating that the loss in performance is acceptable. This experiment therefore supports the hypothesis.

The results plotted in Figure 5 demonstrate that a practitioner concerned with agreement can tune šœ† to meet their needs of accuracy and agreement. This figure serves in part to illuminate why our hyperparameter choice is sensible: šœ† gives us control to slide along the trade-off curve, making post hoc explanation disagreement more of a controllable model parameter so that practitioners have more flexibility to make context-specific model design decisions.

Figure 4: When models are trained naturally, we see disagreement among post hoc explainers (left). However, when trained with our loss function, we see a boost in agreement with only a small cost in accuracy (right). This can be observed visually by the increase in saturation or in more detail by comparing the numbers in corresponding squares.

Figure 5: The trade-off curves of consensus and accuracy. Increasing consensus comes with a drop in accuracy, and the trade-off is such that we can achieve more agreement and still outperform linear baselines. Moreover, as we vary the šœ† value, we move along the trade-off curve. In all three plots we measure agreement with the pairwise rank agreement metric and show that increased consensus comes with a drop in accuracy, but all of our models are still more accurate than the linear baseline, indicated by the vertical dashed line (the shaded region shows ± one standard error).


:::info Authors:

(1) Avi Schwarzschild, University of Maryland, College Park, Maryland, USA and Work completed while working at Arthur (avi1umd.edu);

(2) Max Cembalest, Arthur, New York City, New York, USA;

(3) Karthik Rao, Arthur, New York City, New York, USA;

(4) Keegan Hines, Arthur, New York City, New York, USA;

(5) John Dickerson†, Arthur, New York City, New York, USA (john@arthur.ai).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::
