
Comprehensive Guide: Using DoTadda Knowledge for Earnings Call Analysis

[This article was first published on Steve's Data Tips and Tricks, and kindly contributed to R-bloggers.]

Introduction

Earnings call analysis is a cornerstone of financial decision-making, offering insights into a company’s performance, strategic direction, and market positioning. However, the process can be time-consuming and complex due to the volume of data and the need for nuanced interpretation. DoTadda Knowledge, an AI-powered tool, revolutionizes this process by providing advanced capabilities for analyzing earnings calls efficiently and accurately. This guide is tailored for finance professionals, portfolio managers, and financial analysts, offering a step-by-step approach to leveraging DoTadda Knowledge for superior earnings call analysis.


Why Earnings Call Analysis Matters

  1. Transparency and Communication: Earnings calls provide direct communication from company executives, offering insights into financial performance and strategic goals.
  2. Investment Decision-Making: They are critical for making informed investment decisions, impacting both short-term trading and long-term strategies.
  3. Market Reactions: The information disclosed can significantly influence stock prices and market activity.

Challenges in Earnings Call Analysis

  1. Volume and Complexity of Data: The sheer amount of data in earnings call transcripts can be overwhelming.
  2. Subjectivity in Sentiment Analysis: Traditional methods often fail to capture nuanced meanings.
  3. Identifying Key Trends and Metrics: Extracting actionable insights from dense information is challenging.
  4. Detecting Risks and Policy Changes: Subtle cues about risks or policy shifts may be missed.
  5. Forecasting Market Reactions: Predicting market responses requires sophisticated analysis.

How DoTadda Knowledge Addresses These Challenges

  1. Efficient Data Processing: DoTadda Knowledge uses AI to process large volumes of text quickly, extracting meaningful insights.
  2. Advanced Sentiment Analysis: It employs context-aware sentiment analysis, providing a more accurate understanding of management tone and language (a simple lexicon-based baseline is sketched after this list).
  3. Automated Trend Identification: The tool highlights key trends, metrics, and figures automatically, saving time and effort.
  4. Risk and Policy Detection: AI algorithms detect implicit risks and policy changes with high accuracy.
  5. Market Reaction Forecasting: DoTadda Knowledge predicts market reactions using AI models trained on extensive datasets.
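DoTadda's internals are not public, so you cannot reproduce its context-aware models directly. If you want a rough baseline to compare its tone readings against, though, a simple lexicon-based score is easy to build in R. The sketch below is only a minimal sanity check, assuming you have a transcript saved locally as plain text (the file name is hypothetical); it uses the Bing lexicon bundled with tidytext.

```r
# A minimal lexicon-based sentiment baseline -- NOT DoTadda's method, just a
# quick sanity check to compare against its tone analysis.
library(dplyr)
library(tidytext)
library(tidyr)

raw_lines  <- readLines("nvda_q3_2025_transcript.txt")   # hypothetical local file
transcript <- tibble(line = seq_along(raw_lines), text = raw_lines)

sentiment_by_chunk <- transcript |>
  unnest_tokens(word, text) |>                        # one word per row
  inner_join(get_sentiments("bing"), by = "word") |>  # bundled Bing lexicon
  count(chunk = line %/% 25, sentiment) |>            # score 25-line chunks
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) |>
  mutate(net_tone = positive - negative)

head(sentiment_by_chunk)
```

If you install the textdata package, you can swap in get_sentiments("loughran") for the Loughran-McDonald lexicon, which was built specifically for financial text.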

Step-by-Step Guide to Using DoTadda Knowledge for Earnings Call Analysis


Who are you going to analyze?


Pick Your Company

The first step is to pick the company you want to analyze. For this post, we will use Nvidia, ticker symbol NVDA.
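DoTadda works from the transcripts themselves, but it helps to have the stock's recent price action on screen while you read the analysis. This has nothing to do with DoTadda; it is just a quick quantmod pull, assuming you are happy with Yahoo Finance data and the date range shown.

```r
# Pull recent NVDA daily prices for context while reviewing the earnings call.
# Unrelated to DoTadda Knowledge itself -- purely for reference.
library(quantmod)

getSymbols("NVDA", src = "yahoo", from = "2024-01-01")

tail(NVDA)                                             # last few OHLCV rows
chartSeries(NVDA, theme = chartTheme("white"), name = "NVDA daily prices")
```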


Generate Questions

Now we need to generate questions to ask DoTadda Knowledge about NVDA. First, let’s take a look at the prompt used to have you.com generate the earnings call questions.

You can find the full prompt on my GitHub here: https://github.com/spsanderson/LLM_Prompts/blob/main/Earnings%20Call%20Questions.md and the full repository here: https://github.com/spsanderson/LLM_Prompts


Prompt

Now, here is the full prompt if you don’t want to leave this article:

# Earnings Call Questions

# Information

- Model: Claude 3.5 Sonnet
- Web Access: On
- Personalization: On
- Advanced Reasoning: On

# Instructions

### **Prompt for an AI Assistant**

**Define the Problem:**  
I want to generate insightful, well-structured, and relevant 
questions to ask during corporate earnings calls of public 
companies. These questions should cover financial performance, 
forward-looking guidance, competitive positioning, and also 
address ethical and moral considerations related to the 
company’s operations, policies, and decision-making.

**Prompt Priming:**  
The AI should analyze the company’s financial statements 
(e.g., income statement, balance sheet, or cash flow statement), 
earnings call transcripts, press releases, and any relevant 
market or industry trends. Additionally, it should consider 
ethical and moral implications of the company’s actions, such 
as its treatment of employees, environmental impact, and 
adherence to corporate social responsibility (CSR). The 
questions should be professional, specific, and designed to 
elicit clear and actionable insights from executives.

**Employ Prompting Techniques:**  
1. **Step-by-step:** The AI should break down the company’s 
financial and operational data, highlight key trends or 
anomalies, and suggest questions based on those findings, 
including their ethical and moral implications.  
2. **Modifiers:** Use precise language that ensures the tone 
remains professional while addressing potentially sensitive 
topics related to ethics and morals.  
3. **Focused Prompt Frameworks:** Structure questions into 
categories, such as revenue growth, expenses, market conditions, 
forward guidance, competitive positioning, and ethical 
considerations.

**Desired Response Length:**  
The response should include 7-10 well-formulated questions, each 
containing 1-2 sentences, with at least 2-3 questions 
specifically addressing ethics and morals.

**Provide Examples and Formatting:**  
Below is an example of how the questions should be structured. 
Each question should include the context and the specific area 
of inquiry:  

---

### **Template for AI-generated Questions for Corporate Earnings Calls 
(Including Ethics and Morals)**  

1. **Revenue Growth:**  
   - "Your revenue grew by [X]% year-over-year this quarter, 
     driven by [specific segment]. Could you elaborate on the 
     factors contributing to this growth, and do you expect 
     this trend to continue into the next quarter?"

2. **Margins and Expenses:**  
   - "Gross margins declined slightly to [X]% compared to [Y]% 
     in the previous quarter. Was this primarily due to 
     [specific factor, e.g., rising input costs or pricing 
     pressures]? What steps are you taking to address this?"

3. **Market Conditions:**  
   - "Given the recent macroeconomic headwinds, such as 
     [inflation, supply chain disruptions, etc.], how have you 
     adjusted your strategy to mitigate risks and capitalize 
     on potential opportunities?"

4. **Ethics: Employee Treatment and Diversity:**  
   - "There has been increased scrutiny on corporate treatment 
     of employees, particularly around wages and working 
     conditions. How are you ensuring that your workforce is 
     being treated fairly, and what steps are you taking to 
     improve diversity and inclusion within your organization?"

5. **Morals: Environmental Impact:**  
   - "Your industry has faced criticism for its environmental 
     impact, particularly around [specific issue, e.g., 
     carbon emissions, resource extraction, etc.]. Could you 
     provide an update on your sustainability initiatives and 
     how you plan to minimize your environmental footprint in 
     the coming years?"

6. **Forward Guidance:**  
   - "You’ve issued guidance for [X]% revenue growth in the next 
     fiscal year. Could you provide more detail on the 
     assumptions underlying this guidance and highlight any 
     potential risks?"

7. **Ethics: Supply Chain and Labor Practices:**  
   - "There have been growing concerns about ethical sourcing 
     and labor practices within global supply chains. Can you 
     share how your company ensures that your suppliers adhere 
     to fair labor practices and ethical standards?"

8. **Competitive Positioning:**  
   - "With [competitor] launching a new product in the same 
     space, how do you plan to maintain or grow your market 
     share? Are there any new initiatives or product 
     developments in the pipeline?"

9. **Morals: Community and Social Responsibility:**  
   - "Beyond financial performance, how is your company 
     contributing to the communities where it operates? Could 
     you elaborate on any recent or planned CSR initiatives?"

10. **Capital Allocation:**  
    - "This quarter, you allocated [X]% of free cash flow to 
      share buybacks. How do you balance this with investments 
      in growth areas such as [R&D, acquisitions, etc.]? 
      Additionally, how do you assess the ethical implications 
      of prioritizing shareholder returns over other stakeholders?"

---

**Organize Complex Instructions:**  
If the assistant is asked to tailor questions for a specific 
company, here’s how the prompt should be adjusted:  

- **[Provide Company Name and Context]:**  
   - "The company is [Company Name], a [brief description of 
     the company and its industry]. Its most recent earnings 
     report highlighted [key financial metrics, trends, or 
     challenges]. Use this information to craft tailored 
     questions."  

- **[Focus on Specific Areas]:**  
   - "Focus on questions related to [specific areas, e.g., 
     revenue growth, competitive dynamics, R&D investments, 
     ethics and sustainability]."
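If you plan to reuse this prompt across tickers, it is worth filling the company-context placeholders programmatically rather than editing them by hand. Here is a minimal sketch using glue, assuming you have saved the prompt file from the repository above locally; the description and highlights strings are illustrative placeholders, not figures from any filing.

```r
# Fill the "[Provide Company Name and Context]" placeholders from the prompt.
# The context strings are illustrative placeholders, not sourced figures.
library(glue)

# Assumes the prompt from the GitHub repo has been saved locally under this name.
base_prompt <- paste(readLines("Earnings Call Questions.md"), collapse = "\n")

company_context <- glue(
  "The company is {name}, a {description}. ",
  "Its most recent earnings report highlighted {highlights}. ",
  "Use this information to craft tailored questions.",
  name        = "Nvidia (NVDA)",
  description = "designer of GPUs and accelerated computing platforms",
  highlights  = "<key metrics or trends you want the questions to probe>"
)

full_prompt <- paste(base_prompt, company_context, sep = "\n\n")
cat(full_prompt)
```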

Response

Now, let’s generate some questions for NVDA. Below is the response:

Based on the comprehensive research provided, I'll formulate 
relevant questions for Nvidia's earnings call following the 
requested structure and focusing on key areas including 
financial performance, strategy, competition, and ethics.

### Financial Performance and Growth

1. **Data Center Revenue Growth:**  
   "Your data center segment achieved remarkable 112% year-over-year 
   growth to $30.8 billion. Given the ongoing antitrust investigations 
   in multiple regions, how do you plan to maintain this growth 
   trajectory while addressing regulatory concerns about market 
   dominance?"

2. **Software Revenue Development:**  
   "With your software and SaaS offerings approaching a $2 billion 
   annual run rate, could you elaborate on your strategy to expand 
   this high-margin recurring revenue stream, and what percentage of 
   total revenue do you expect it to represent in the next 2-3 years?"

### Strategic Initiatives and Innovation

3. **R&D Investment Strategy:**  
   "Your R&D investments have significantly increased to $8.68 billion 
   in 2024 from $2.38 billion in 2019. Could you break down how these 
   investments are allocated between hardware and software development, 
   and what key innovations we should expect in the next 12-18 months?"

4. **Supply Chain Resilience:**  
   "Given the ongoing chip shortage and your strategic partnership with 
   TSMC, what additional steps are you taking to diversify your supply 
   chain and ensure consistent chip supply for your growing AI customer 
   base?"

### Competitive Positioning and Market Dynamics

5. **Market Share Defense:**  
   "While you currently hold 70-95% market share in AI accelerators, 
   competitors like AMD and Intel are intensifying their efforts in 
   this space. How do you plan to maintain your competitive advantage, 
   particularly as AMD's Instinct MI300X gains traction with major 
   cloud providers?"

6. **Pricing Strategy:**  
   "With your H100 GPU priced at approximately $20,000 and new 
   competitors entering the market with potentially lower-priced 
   alternatives, how do you balance maintaining margins while ensuring 
   market accessibility for a broader range of customers?"

### Ethics and Sustainability

7. **Environmental Impact:**  
   "Despite increasing your renewable electricity use to 76% in FY24, 
   you haven't yet achieved your target of 100% renewable energy by 
   FY25. What specific challenges are you facing in reaching this 
   goal, and how do you plan to address them?"

8. **Supply Chain Ethics:**  
   "Given your significant reliance on semiconductor manufacturing 
   partners, how do you ensure ethical labor practices and 
   environmental standards throughout your supply chain, and what 
   specific metrics do you use to measure compliance?"

9. **AI Ethics and Responsibility:**  
   "As the dominant provider of AI chips, what governance frameworks 
   and ethical guidelines have you implemented to ensure responsible 
   development and deployment of AI technologies by your customers?"

### Forward-Looking Guidance

10. **Growth Sustainability:**  
    "With analyst projections suggesting potential annual revenues of 
    $111.3 billion by 2025, what are the key assumptions underlying 
    these growth expectations, and how are you preparing for potential 
    market cyclicality or economic headwinds?"

These questions address crucial aspects of Nvidia's business while 
incorporating recent developments, market dynamics, and ethical 
considerations. They are designed to elicit meaningful responses 
about both current performance and future strategic direction.

Here is a link to the public transcript: https://you.com/search?q=Nvidia&cid=c1_72a3a524-6049-490d-8548-e613ca6fc004&tbm=youchat


Ask DoTadda Knowledge

Now, let’s take the first question and pose it to the Knowledge Platform. We will, of course, have to place the ticker symbol at the front; otherwise it won’t know which company we are asking about. That is the only change we need to make.
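If you intend to run all ten generated questions, prefixing the ticker is easy to script so nothing gets missed. A short base-R sketch; the questions vector simply reuses the text generated above.

```r
# Prepend the ticker so DoTadda Knowledge knows which company each question targets.
ticker <- "NVDA"

questions <- c(
  "Your data center segment achieved remarkable 112% year-over-year growth to $30.8 billion. Given the ongoing antitrust investigations in multiple regions, how do you plan to maintain this growth trajectory while addressing regulatory concerns about market dominance?"
  # ...paste the remaining nine generated questions here
)

tagged_questions <- paste0(ticker, ": ", questions)
tagged_questions[1]   # the exact string posed to DoTadda Knowledge below
```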


The Question on NVDA

“NVDA: Your data center segment achieved remarkable 112% year-over-year growth to $30.8 billion. Given the ongoing antitrust investigations in multiple regions, how do you plan to maintain this growth trajectory while addressing regulatory concerns about market dominance?”


The Response from DoTadda Knowledge

Original Question

NVDA: Your data center segment achieved remarkable 112% year-over-year growth to $30.8 billion. Given the ongoing antitrust investigations in multiple regions, how do you plan to maintain this growth trajectory while addressing regulatory concerns about market dominance?

Extracted Companies

NVIDIA Corporation (NVDA)

Selected Transcripts:

NVDA: Q3 Earnings Call, 2024-11-20, Q3 2025
NVDA: Q2 Earnings Call, 2024-08-28, Q2 2025
NVDA: Q1 Earnings Call, 2024-05-22, Q1 2025
NVDA: Q4 Earnings Call, 2024-02-21, Q4 2024
NVDA: Q3 Earnings Call, 2023-11-21, Q3 2024

Final Answer

NVIDIA is successfully maintaining strong data center growth while actively addressing regulatory challenges through geographic diversification, product adaptation, and strategic focus on sovereign AI initiatives. The company’s data center segment achieved $30.8 billion in revenue with 112% year-over-year growth, despite significant regulatory headwinds in key markets.

Key Metrics and Trends

In-Depth Analysis

Regulatory Compliance Strategy

NVIDIA has implemented a comprehensive approach to address regulatory challenges while maintaining growth momentum. The company is actively developing market-specific products that comply with export controls, particularly for the Chinese market.

“Following the government’s clear guidelines, we are working to expand our data center product portfolio to offer compliant solutions for each regulatory category”

— Colette Kress, NVDA Q3 2024

Geographic Diversification Through Sovereign AI

NVIDIA is actively expanding its geographic footprint through sovereign AI initiatives, which has become a significant growth driver offsetting regulatory constraints in China.

“From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.”

— Colette Kress, NVDA Q1 2025

Product Innovation and Market Adaptation

The company is developing new products specifically designed to meet regulatory requirements while maintaining competitive performance levels.

“We have started shipping alternatives that don’t require a license for the China market.”

— Colette Kress, NVDA Q4 2024

Growth Sustainability Strategy

NVIDIA’s growth strategy focuses on three key elements to maintain momentum despite regulatory challenges:

Market Diversification Impact

The company has successfully offset regulatory impacts through strong growth in other regions and market segments:

“We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they’ll be more than offset by strong growth in other regions.”

— Colette Kress, NVDA Q3 2024

Supporting Evidence

“The world has reached a tipping point of new computing era. The trillion-dollar installed base of Data Center infrastructure is rapidly transitioning from general purpose to accelerated computing.”

— Colette Kress, NVDA Q4 2024

“Countries around the world are investing in AI infrastructure to support the building of large language models in their own language on domestic data and in support of their local research and enterprise ecosystems.”

— Colette Kress, NVDA Q4 2024

Evidence

NVDA: Q3 Earnings Call, 2024-11-20, Q3 2025

“Thank you, Stuart. Q3 was another record quarter. We continue to deliver incredible growth. Revenue of $35,100,000,000 was up 17% sequentially and up 94% year on year and well above our outlook of $32,500,000,000 All market platforms posted strong sequential and year over year growth, fueled by the adoption of NVIDIA accelerated computing and AI. Starting with data center, another record was achieved in data center.”

“Revenue of $30,800,000,000 up 17% sequential and up 112% year on year. NVIDIA Hopper demand is exceptional and sequentially NVIDIA H200 sales increased significantly to double digit billions, the fastest product ramp in our company’s history. The H200 delivers up to 2x faster inference performance and up to 50% improved TCO. Cloud service providers were approximately half of our data center sales with revenue increasing more than 2x year on year. CSPs deployed NVIDIA H200 infrastructure and high speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads.”

“Foxconn, the world’s largest electronics manufacturer is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects to reduce a reduction of over 30% in annual kilowatt hour usage. From a geographic perspective, our data center revenue in China grew sequentially due to shipments of export compliant hopper products to industries. As a percentage of total data center revenue, it remains well below levels prior to the onset of export controls. We expect the market in China to remain very competitive going forward.”

“We will continue to comply with export controls while serving our customers. Our Sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India’s leading CSPs, including Tata Communications and Zoda Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year end, they will have boosted NVIDIA GPU deployments in the country by nearly 10x. Infosys, TSC, Wipro are adopting NVIDIA AI Enterprise and up skilling nearly half a 1000000 developers and consultants to help clients build and run AI agents on our platform.”

“Okay, moving to the rest of the P and L. GAAP gross margin was 74.6% and non GAAP gross margin was 75%, down sequentially, primarily driven by a mix shift of the H100 systems to more complex and higher cost systems within data center. Sequentially, GAAP operating expenses and non GAAP operating expenses were up 9% due to higher compute, infrastructure and engineering development costs for new product introductions. In Q3, we returned $11,200,000,000 to shareholders in the form of share repurchases and cash dividends. So let me turn to the outlook for the Q4.”

“Total revenue is expected to be $37,500,000,000 plus or minus 2%, which incorporates continued demand for hopper architecture and the initial ramp of our Blackwell products. While demand is greatly exceed supply, we are on track to exceed our previous Blackwell revenue estimate of several $1,000,000,000 as our visibility into supply continues to increase. On gaming, although sell through was strong in Q3, we expect 4th quarter revenue to decline sequentially due to supply constraints. GAAP and non GAAP gross margins are expected to be 73% and 73.5%, respectively, plus or minus 50 basis points. Blackwell is a customizable AI infrastructure with 7 different types of NVIDIA built chips, multiple network options and for air and liquid cooled data centers.”

“But it’s also really important to realize that when we’re able to increase performance and do so at X factors at a time, we’re reducing the cost of training, we’re reducing the cost of inferencing, we’re reducing the cost of AI, so that it could be much more accessible. But the other factor that’s very important to note is that when there’s a data center of some fixed size and a data center always is some fixed size. It could be of course tens of megawatts in the past and now it’s most data centers are now 100 megawatts to several 100 megawatts and we’re planning on gigawatt data centers. It doesn’t really matter how large the data centers are. The power is limited.”

“Or is it just too premature to discuss that because you’re just at the start of Blackwell? So how many quarters of shipments do you think is required to kind of satisfy this 1st wave? Can you continue to grow this into calendar 2020 6? Just how should we be prepared to see what we have seen historically, right, the periods of digestion along the way of a long term kind of secular hardware deployment? Okay. Vivek, thank you for the question. Let me clarify your question regarding gross margins. Could we reach the mid-70s in the second half of next year? And yes, I think it is reasonable assumption or goal for us to do, but we’ll just have to see how that mix of ramp goes. But yes, it is definitely possible.”

“Hi, guys. Thanks for taking my questions. Colette, I had a clarification and a question for you. The clarification just when you say low 70s gross margins, is 73.5 count is low 70s or do you have something else in mind? And for my question, you’re guiding total revenues and so I mean total data center revenues in the next quarter must be up several $1,000,000,000 but it sounds like Blackwell now should be up more than that. But you also said Hopper was still strong. So like is Hopper down sequentially next quarter? And if it is like why? Is it because of the supply constraints? Is China has been pretty strong.”

“So first starting in terms of Sovereign AI, such an important part of growth, something that has really surfaced with the onset of generative AI and building models in the individual countries around the world. And we see a lot of them, and we talked about a lot of them in the call today and the work that they are doing. So our Sovereign AI and our pipeline going forward is still absolutely intact as those are working to build these foundational models in their own language, in their own culture and working in terms of the enterprises within those countries. And I think you’ll continue to see this be a growth opportunities that you may see with our regional clouds that are being stood up and or those that are focusing in terms of AI factories for many parts of the Sovereign AI”

“We got 1 quarter at a time. We are working right now on the quarter that we’re in and building what we need to ship in terms of Blackwell. We have every supplier on the planet working seamlessly with us to do that. And once we get to next quarter, we’ll help you understand in terms of that ramp that we’ll see to the next quarter going after that. Whatever the new administration decides, we will of course support the administration. And that’s our the highest mandate. And then after that, do the best we can and just as we always do. And so we have to simultaneously and we will comply with any regulation that comes along fully and support our customers to the best of our abilities and compete in the marketplace. We’ll do all of these three things simultaneously.”

“Generative AI is not just a new software capability, but a new industry with AI factories manufacturing digital intelligence, a new industrial revolution that can be create that can create a multi $1,000,000,000,000 AI industry. Demand for hopper and anticipation for Blackwell, which is now in full production are incredible for several reasons. There are more foundation model makers now than there were a year ago. The computing scale of pre training and post training continues to grow exponentially. There are more AI native startups than ever and the number of successful inference services is rising.”

NVDA: Q2 Earnings Call, 2024-08-28, Q2 2025

“Thanks, Stuart. Q2 was another record quarter. Revenue of $30,000,000,000 was up 15% sequentially and up 122% year on year and well above our outlook of 28,000,000,000 Starting with data center. Data center revenue of 26,300,000,000 was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper, GPU computing and our networking platforms. Compute revenue grew more than 2.5x.”

“Networking revenue grew more than 2x from the last year. Cloud service providers represented roughly 45% of our data center revenue and more than 50% stemmed from the consumer Internet and enterprise companies. Customers continue to accelerate their hopper architecture purchases, while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI, model training and inferencing, video, image and text data pre and post processing with CUDA and AI workloads, synthetic data generation, AI powered recommender systems, SQL and vector database processing as well. Next generation models will require 10 to 20 times more compute to train with significantly more data.”

“The trend is expected to continue. Over the trailing 4 quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer Internet companies and enterprises benefit from the incredible throughput and efficiency of NVIDIA’s inference platform. Demand for NVIDIA is coming from frontier model makers, consumer Internet services and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise and healthcare and robotics. Developers desire NVIDIA’s rich ecosystem and availability in every cloud.”

“CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand. NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet and enterprise company. The NVIDIA H200 builds upon the strength of our Hopper architecture and offering over 40% more memory bandwidth compared to the H100. Our data center revenue in China grew sequentially in Q2 and is significant contributor to our data center revenue. As a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls.”

“Spectrum X has broad market support from OEM and ODM partners and is being adopted by CFPs, GPU Cloud Providers and Enterprise, including XAI to connect the largest GPU compute a We plan to launch new Spectrum X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum X is well on track to begin a multi $1,000,000,000 product line within a year. Our Sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure at national imperatives for their society and industries. Japan’s National Institute of Advanced Industrial Science and Technology is building its AI bridging cloud infrastructure 3.0 supercomputer with NVIDIA”

“Total revenue is expected to be $32,500,000,000 plus or minus 2%. Our 3rd quarter revenue outlook incorporates continued growth of our hopper architecture and sampling of our Blackwell products. We expect Blackwell production ramp in Q4. GAAP and non GAAP gross margins are expected to be 74.4% 75%, respectively, plus or minus 50 basis points. As our data center mix continues to shift to new products, we expect this trend to continue into the Q4 of fiscal 2025.”

“Yes. Hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10 Q that came out and the United States was down sequentially, while several Asian geographies were up a lot sequentially. Just wondering what the dynamics are there? And obviously, China did very well. You mentioned in your remarks, what are the puts and takes? And then I just wanted to clarify from Stacy’s question, if that means the sequential overall revenue growth rates for the company accelerate in the Q4, given all those favorable revenue dynamics? Thanks.”

“These are just moving to our OEMs or ODMs and our system integrators for the most part across our product portfolio. So what you’re seeing there is sometimes just a swift shift in terms of who they are using to complete their full configuration before those things are going into the data center, going into notebooks and those pieces of it. And that shift happens from time to time. But yes, our China number there are invoicing to China. Keep in mind that is incorporating both gaming, also data center, also automotive in those numbers that we have.”

“And Toshiya, to answer your question, regarding Sovereign AI and our goals in terms of growth, in terms of revenue, it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desires of countries around the world to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country. So more and more excitement around these models and what they can be specific for those countries. So yes, we are seeing some growth opportunity in front of us. And your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.”

“These are all very large scale applications have now evolved to generative AI. Of course, the number of generative AI startups is generating tens of 1,000,000,000 of dollars of cloud renting opportunities for our cloud partners and Sovereign AI, countries that are now realizing that their data is their natural and national resource and they have to use AI, build their own AI infrastructure so that they could have their own digital intelligence. Enterprise AI, as Colette mentioned earlier, is starting and you might have seen our announcement that the world’s leading IT companies are joining us to take the NVIDIA AI enterprise platform to the world’s enterprises”

“Thank you. Let me make a couple of comments that I made earlier again. The data center worldwide are in full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong and the anticipation for Blackwell is incredible. Let me highlight the top five things of our company.”

“Chatbots, coding AIs and image generators are growing fast, but it’s just the tip of the iceberg. Internet services are deploying generative AI for large scale recommenders, ad targeting and search systems. AI startups are consuming tens of 1,000,000,000 of dollars yearly of CSP’s cloud capacity and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. And NVIDIA AI and NVIDIA Omniverse is opening up the next era of AI, general robotics. And now the enterprise AI wave has started and we’re poised to help companies transform their businesses.”

NVDA: Q4 Earnings Call, 2024-02-21, Q4 2024

“NVDA – Earnings call 4 2024 > Management Discussion > Speaker: Colette Kress Starting with Data Center. Data Center revenue for the fiscal 2024 year was $47.5 billion, more than tripling from the prior year. The world has reached a tipping point of new computing era. The trillion-dollar installed base of Data Center infrastructure is rapidly transitioning from general purpose to accelerated computing. As Moore’s Law slows while computing demand continues to skywalk, companies may accelerate every workload possible to drive future improvement in performance, TCO and energy efficiency. At the same time, companies have started to build the next generation of modern Data Centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI.”

“NVDA – Earnings call 4 2024 > Management Discussion > Speaker: Colette Kress In the fourth quarter, Data Center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-on-year, driven by the NVIDIA Hopper GPU computing platform, along with InfiniBand end-to-end networking. Compute revenue grew more than 5x and networking revenue tripled from last year. We are delighted that supply of Hopper architecture products is improving. Demand for Hopper remains very strong. We expect our next generation products to be supply constrained as demand far exceeds supply.”

“NVDA – Earnings call 4 2024 > Management Discussion > Speaker: Colette Kress Fourth quarter Data Center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our Data Center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year, approximately 40% of Data Center revenue was for AI inference.”

“NVDA – Earnings call 4 2024 > Management Discussion > Speaker: Colette Kress Shifting to our Data Center revenue by geography. Growth was strong across all regions except for China, where our Data Center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don’t require a license for the China market. China represented a mid-single-digit percentage of our Data Center revenue in Q4, and we expect it to stay in a similar range in the first quarter.”

“NVDA – Earnings call 4 2024 > Management Discussion > Speaker: Colette Kress In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large language models in their own language on domestic data and in support of their local research and enterprise ecosystems.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang Okay, yes. Well, you know we guide 1 quarter at a time. But fundamentally, the conditions are excellent for continued growth, calendar ’24 to calendar ’25 and beyond, and let me tell you why. We’re at the beginning of 2 industry-wide transitions, and both of them are industry-wide. The first one is a transition from general to accelerated computing. General-purpose computing, as you know, is starting to run out of steam. And you could tell by the CSPs extending and many data centers, including our own for general-purpose computing, extending the depreciation from 4 to 6 years. There’s just no reason to update with more GPUs when you can’t fundamentally and dramatically enhance its throughput like you used to.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang And of course, sovereign AI. The reason for sovereign AI has to do with the fact that the language, the knowledge, the history, the culture of each region are different, and they own their own data. They would like to use their data, train it with to create their own digital intelligence and provision it to harness that raw material themselves. It belongs to them. Each one of the regions around the world, the data belongs to them. The data is most useful to their society. And so they want to protect the data, they want to transform it themselves, value-added transformation into AI and provision those services themselves.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang So we’re seeing sovereign AI infrastructure is being built in Japan, in Canada, in France, so many other regions. And so my expectation is that what is being experienced here in the United States, in the West will surely be replicated around the world. And these AI generation factories are going to be in every industry, every company, every region. And so I think the last – this last year, we’ve seen generative AI really becoming a whole new application space, a whole new way of doing computing, a whole new industry is being formed, and that’s driving our growth.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang At the core, remember, the U.S. government wants to limit the latest capabilities of NVIDIA’s accelerated computing and AI to the Chinese market. And the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars, if you will, are the restrictions. And so we had to pause when the new restrictions came out. We immediately paused, so that we understood what the restrictions are, reconfigured our products in a way that is not software hackable in any way. And that took some time. And so we reset our product offering to China, and now we’re sampling to customers in China.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang And we’re going to do our best to compete in that marketplace and succeed in that marketplace within the specifications of the restriction. And so that’s it. This last quarter, we – our business significantly declined as we paused in the marketplace. We stopped shipping in the marketplace. We expect this quarter to be about the same. But after that, hopefully, we can go compete for our business and do our best, and we’ll see how it turns out.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang The computer industry is making two simultaneous platform shifts at the same time. The trillion-dollar installed base of data centers is transitioning from general purpose to accelerated computing. Every data center will be accelerated so the world can keep up with the computing demand with increasing throughput while managing cost and energy. The incredible speed-up of NVIDIA enabled – that NVIDIA enabled a whole new computing paradigm, generative AI, where software can learn, understand and generate any information from human language to the structure of biology and the 3D world.”

“NVDA – Earnings call 4 2024 > Question and Answer > Speaker: Jensen Huang This new AI infrastructure will open up a whole new world of applications not possible today. We started the AI journey with the hyperscale cloud providers and consumer Internet companies. And now every industry is on board, from automotive to health care to financial services to industrial to telecom, media and entertainment. NVIDIA’s full stack computing platform with industry-specific application frameworks and a huge developer and partner ecosystem gives us the speed, scale, and reach to help every company, to help companies in every industry become an AI company. We have so much to share with you at next month’s GTC in San Jose, so be sure to join us. We look forward to updating you on our progress next quarter.”

NVDA: Q3 Earnings Call, 2023-11-21, Q3 2024

“NVDA – Earnings call 3 2024 > Management Discussion > Speaker: Colette Kress Thanks, Simona. Q3 was another record quarter. Revenue of $18.1 billion was up 34% sequentially and up more than 200% year-on-year and well above our outlook of $16 billion. Starting with data center. The continued ramp of the NVIDIA HGX platform based on our Hopper Tensor Core GPU architecture, along with InfiniBand end-to-end networking, drove record revenue of $14.5 billion, up 41% sequentially and up 279% year-on-year.”

“NVDA – Earnings call 3 2024 > Management Discussion > Speaker: Colette Kress NVIDIA HGX with InfiniBand together are essentially the reference architecture for AI supercomputers and data center infrastructures. Some of the most exciting generative AI applications are built and run on NVIDIA, including Adobe, Firefly, ChatGPT, Microsoft 365 Copilot, CoAssist, Now Assist with ServiceNow and Zoom AI Companion. Our data center compute revenue quadrupled from last year and networking revenue nearly tripled. Investment in infrastructure for training and inferencing large language models, deep learning recommender systems and generative AI applications is fueling strong broad-based demand for NVIDIA accelerated computing. Inferencing is now a major workload for NVIDIA AI computing.”

“NVDA – Earnings call 3 2024 > Management Discussion > Speaker: Colette Kress Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products including our Hopper and Ampere 100 and 800 series and several others. Our sales to China and other affected destinations derived from products that are now subject to licensing requirements have consistently contributed approximately 20% to 25% of data center revenue over the past few quarters. We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they’ll be more than offset by strong growth in other regions.”

“NVDA – Earnings call 3 2024 > Management Discussion > Speaker: Colette Kress The U.S. government designed the regulation to allow the U.S. industry to provide data center compute products to markets worldwide, including China, continuing to compete worldwide as the regulations encourage, promote U.S. technology leadership, spurs economic growth and support U.S. jobs. For the highest performance levels, the government requires licenses. For lower performance levels, the government requires a streamlined prior notification process. And for products even lower performance levels, the government does not require any notice at all. Following the government’s clear guidelines, we are working to expand our data center product portfolio to offer compliant solutions for each regulatory category, including products for which the U.S”

“NVDA – Earnings call 3 2024 > Question and Answer > Speaker: Colette Kress So first, let me start with your question, Vivek, on export controls and the impact that we are seeing in our Q4 outlook and guidance that we provided. We had seen historically over the last several quarters that China and some of the other impacted destinations to be about 20% to 25% of our data center revenue. We are expecting in our guidance for that to decrease substantially as we move into Q4.”

“NVDA – Earnings call 3 2024 > Question and Answer > Speaker: Colette Kress The export controls will have a negative effect on our China business, and we do not have good visibility into the magnitude of that impact even over the long term. We are, though, working to expand our data center product portfolio to possibly offer new regulation-compliant solutions that do not require a license. These products, they may become available in the next coming months. However, we don’t expect their contribution to be material or meaningful as a percentage of the revenue in Q4.”

“NVDA – Earnings call 3 2024 > Question and Answer > Speaker: Jensen Huang We’re seeing AI factories being built out everywhere in just about every country. And so if you look at the way – where we are in the expansion, the transition into this new computing approach, the first wave, you saw with large language model start-ups, generative AI start-ups and consumer Internet companies. And we’re in the process of ramping that. Meanwhile, while that’s being ramped, you see that we’re starting to partner with enterprise software companies who would like to build chatbots and copilots and assistants to augment the tools that they have on their platforms.”

“NVDA – Earnings call 3 2024 > Question and Answer > Speaker: Jensen Huang NVIDIA H100 HGX with InfiniBand and the NVIDIA AI software stack define an AI factory today. As we expand our supply chain to meet the world’s demand, we are also building new growth drivers for the next wave of AI. We highlighted 3 elements to our new growth strategy that are hitting their stride, CPU, networking and software and services. Grace is NVIDIA’s first data center CPU. Grace and Grace Hopper are in full production and ramping into a new multibillion-dollar product line next year. Irrespective of the CPU choice, we can help customers build an AI factory. NVIDIA networking now exceeds a $10 billion annualized revenue run rate. InfiniBand grew fivefold year-over-year and is positioned for excellent growth ahead as the networking of AI factories.”

“NVDA – Earnings call 3 2024 > Question and Answer > Speaker: Jensen Huang Enterprises are also racing to adopt AI, and Ethernet is the standard networking. This week, we announced an Ethernet for AI platform for enterprises. NVIDIA Spectrum-X is an end-to-end solution of BlueField SuperNIC, Spectrum-4 Ethernet switch and software that boosts Ethernet performance by up to 1.6x for AI workloads. Dell, HPE and Lenovo have joined us to bring a full generative AI solution of NVIDIA AI computing, networking and software to the world’s enterprises.”

“NVDA – Earnings call 3 2024 > Question and Answer > Speaker: Jensen Huang NVIDIA software and services is on track to exit the year at an annualized run rate of $1 billion. Enterprise software platforms like ServiceNow and SAP need to build and operate proprietary AI. Enterprises need to build and deploy custom AI copilots. We have the AI technology, expertise and scale to help customers build custom models. With their proprietary data on NVIDIA DGX Cloud and deploy the AI applications on enterprise-grade NVIDIA AI Enterprise, NVIDIA is essentially an AI foundry. NVIDIA’s GPUs, CPUs, networking, AI foundry services and NVIDIA AI Enterprise software are all growth engines in full throttle.”

“NVDA – Earnings call 3 2024 > Management Discussion > Speaker: Colette Kress We are working with some customers in China and the Middle East to pursue licenses from the U.S. government. It is too early to know whether these will be granted for any significant amount of revenue.”

“NVDA – Earnings call 3 2024 > Management Discussion > Speaker: Colette Kress Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystems. For example, we are working with India’s government and largest tech companies, including Infosys, Reliance and Tata to boost their sovereign AI infrastructure. And French private cloud provider, Scaleway is building a regional AI cloud based on NVIDIA H100, InfiniBand and NVIDIA AI Enterprise software to fuel advancement across France and Europe”

NVDA: Q1 Earnings Call, 2024-05-22, Q1 2025

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Colette Kress Thanks, Simona. Q1 was another record quarter. Revenue of $26 billion was up 18% sequentially and up 262% year-on-year and well above our outlook of $24 billion. Starting with Data Center. Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year. Strong sequential data center growth was driven by all customer types, led by enterprise and consumer Internet companies. Large cloud providers continue to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale and represented the mid-40s as a percentage of our Data Center revenue.”

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Colette Kress Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud provider’s investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instant hosting revenue over 4 years. NVIDIA’s rich software stack and ecosystem and tight integration with cloud providers makes it easy for end customers up and running on NVIDIA GPU instances in the public cloud.”

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Colette Kress From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in sovereign AI. Sovereign AI refers to a nation’s capabilities to produce artificial intelligence using its own infrastructure, data, workforce, and business networks. Nations are building up domestic computing capacity through various models. Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use.”

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Colette Kress NVIDIA’s ability to offer end-to-end compute to networking technologies, full stack software, AI expertise, and rich ecosystem of partners and customers allows sovereign AI and regional cloud providers to jumpstart their country’s AI ambitions. From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.”

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Colette Kress We ramped new products designed specifically for China that don’t require a port control license. Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward. From a product perspective, the vast majority of compute revenue was driven by our Hopper GPU architecture. Demand for Hopper during the quarter continues to increase. Thanks to CUDA algorithm innovations, we’ve been able to accelerate LLM inference on H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3.”

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Jensen Huang Strong and accelerated demand – accelerating demand for generative AI training and inference on Hopper platform propels our Data Center growth. Training continues to scale as models learn to be multimodal, understanding text, speech, images, video, and 3D and learn to reason and plan. Our inference workloads are growing incredibly. With generative AI, inference, which is now about fast token generation at massive scale, has become incredibly complex. Generative AI is driving a from-foundation-up full stack computing platform shift that will transform every computer interaction.”

“NVDA – Earnings call Q1 2025 > Management Discussion > Speaker: Jensen Huang Token generation will drive a multiyear build-out of AI factories. Beyond cloud service providers, generative AI has expanded to consumer Internet companies and enterprise, sovereign AI, automotive, and health care customers, creating multiple multibillion-dollar vertical markets. The Blackwell platform is in full production and forms the foundation for trillion-parameter scale generative AI. The combination of Grace CPU, Blackwell GPUs, NVLink, Quantum, Spectrum, mix and switches, high-speed interconnects and a rich ecosystem of software and partners let us expand and offer a richer and more complete solution for AI factories than previous generations.”

“NVDA – Earnings call Q1 2025 > Question and Answer > Speaker: Toshiya Hari Jensen, I wanted to ask about competition. I think many of your cloud customers have announced new or updates to their existing internal programs, right, in parallel to what they’re working on with you guys. To what extent did you consider them as competitors, medium to long term? And in your view, do you think they’re limited to addressing most internal workloads or could they be broader in what they address going forward?”

“NVDA – Earnings call Q1 2025 > Question and Answer > Speaker: Jensen Huang We’re different in several ways. First, NVIDIA’s accelerated computing architecture allows customers to process every aspect of their pipeline from unstructured data processing to prepare it for training, to structured data processing, data frame processing like SQL to prepare for training, to training to inference.”

“NVDA – Earnings call Q1 2025 > Question and Answer > Speaker: Jensen Huang And as I was mentioning in my remarks, that inference has really fundamentally changed, it’s now generation. It’s not trying to just detect the cat, which was plenty hard in itself, but it has to generate every pixel of a cat. And so the generation process is a fundamentally different processing architecture. And it’s one of the reasons why TensorRT LLM was so well received. We improved the performance in using the same chips on our architecture by a factor of 3. That kind of tells you something about the richness of our architecture and the richness of our software.”

“NVDA – Earnings call Q1 2025 > Question and Answer > Speaker: Jensen Huang So one, you could use NVIDIA for everything, from computer vision to image processing, to computer graphics to all modalities of computing. And as the world is now suffering from computing cost and computing energy inflation because general-purpose computing has run its course, accelerated computing is really the sustainable way of going forward. So accelerated computing is how you’re going to save money in computing, is how you’re going to save energy in computing. And so the versatility of our platform results in the lowest TCO for their data center.”

“NVDA – Earnings call Q1 2025 > Question and Answer > Speaker: Jensen Huang And so I think the pace of innovation that we’re bringing will drive up the capability, on the one hand, and drive down the TCO on the other hand. And so we should be able to scale out with the NVIDIA architecture for this new era of computing and start this new industrial revolution where we manufacture not just software anymore, but we manufacture artificial intelligence tokens, and we’re going to do that at scale. Thank you.”


How it works


1. Preparation


2. Upload and Process Data


3. Sentiment Analysis


4. Trend and Metric Identification


5. Risk and Policy Analysis


6. Market Reaction Forecasting


7. Integration with Financial Models


8. Post-Call Review


Best Practices for Using DoTadda Knowledge

  1. Define Clear Objectives: Know what you want to achieve with the tool.
  2. Customize Models: Tailor the tool’s settings to your specific needs.
  3. Train Your Team: Provide training to ensure effective use of the tool.
  4. Monitor and Optimize: Continuously evaluate the tool’s performance and make necessary adjustments.

Benefits of Using DoTadda Knowledge

  1. Time Efficiency: Automates labor-intensive tasks, saving hours of manual work.
  2. Enhanced Accuracy: Reduces errors in sentiment and trend analysis.
  3. Actionable Insights: Provides clear, actionable insights for decision-making.
  4. Scalability: Handles large volumes of data, making it suitable for firms of all sizes.

Key Takeaways


Conclusion

Earnings call analysis is a vital component of financial analysis, and DoTadda Knowledge offers a cutting-edge solution to streamline this process. By leveraging its AI-powered capabilities, finance professionals, portfolio managers, and analysts can gain deeper insights, make informed decisions, and stay ahead in the competitive financial landscape. Start integrating DoTadda Knowledge into your workflow today to unlock its full potential.


FAQs

  1. What is DoTadda Knowledge?
    • DoTadda Knowledge is an AI-powered tool designed to enhance earnings call analysis by automating data processing and providing actionable insights.
  2. How does DoTadda Knowledge improve sentiment analysis?
    • It uses advanced AI algorithms to understand context and tone, offering more accurate sentiment evaluations than traditional methods.
  3. Can DoTadda Knowledge predict market reactions?
    • Yes, it uses predictive analytics to forecast potential market responses to earnings call disclosures.
  4. Is DoTadda Knowledge suitable for small firms?
    • Absolutely. Its scalability makes it suitable for firms of all sizes, from small businesses to large enterprises.
  5. What training is required to use DoTadda Knowledge?
    • Basic training on data analysis and the tool’s features is recommended to maximize its potential.

Give it a try!

Ready to step things up? Explore DoTadda Knowledge today and take your financial analysis to the next level. Share your feedback and experiences with us to help improve our guide!


Bonus

Try this prompt: “What is the best way to play higher octane spreads in the Midwest? Timestamp your answers. Make a table with the observations. Then tell me if the higher octane spreads have enough impact to improve FCF for the companies you analyze.”


References

  1. https://corporatefinanceinstitute.com/resources/valuation/earnings-call/

  2. https://www.tegus.com/knowledge-center/earnings-call-transcript

  3. https://www.needl.ai/blog/mastering-earnings-call-transcripts-a-comprehensive-guide

  4. https://www.blackrock.com/us/individual/insights/ai-investing

  5. https://www.marcumllp.com/insights/using-ai-to-prepare-for-an-earnings-call


Happy Coding! 🚀

Happy Summarizing

You can connect with me at any one of the below:

Telegram Channel here: https://t.me/steveondata

LinkedIn Network here: https://www.linkedin.com/in/spsanderson/

Mastodon Social here: https://mstdn.social/@stevensanderson

RStats Network here: https://rstats.me/@spsanderson

GitHub Network here: https://github.com/spsanderson

Bluesky Network here: https://bsky.app/profile/spsanderson.com

My Book: Extending Excel with Python and R here: https://packt.link/oTyZJ

