Build a powerful AI to quickly read NVDA's financials
I’m not a CFO. Or a CPA. And analyzing financial statements is… boring.
Unfortunately for people like me, it's important to get fundamental analysis right.
Otherwise, investors buy overpriced assets and lose money. Legendary investors like Warren Buffett famously spent hours reading quarterly reports.
I don’t know about you, but I think that sounds terrible.
During my career, I learned firsthand the pain of sifting through endless financial data. The process was time-consuming and prone to mistakes.
But it’s 2024 and there's a better way:
using Python to leverage AI for financial analysis.
By reading today's newsletter, you'll analyze Nvidia’s latest financial filings using LlamaIndex and OpenAI’s GPT-4o.
Let's go!
Generative AI is transforming financial statement analysis. Traditionally, experts like Warren Buffett spent hours analyzing financial statements manually. Now, AI can process hundred-page documents quickly and accurately.
The game has changed.
Generative AI uses large language models to extract insights from textual data. Combining those insights with other Python tools turns documents into data you can query with code. This helps investors focus on strategy and decision-making rather than data crunching.
Hedge funds and big asset managers are using tools like LlamaIndex to crunch their numbers.
LlamaIndex is a framework for connecting large language models to external data sources like documents, databases, and APIs. It provides efficient data indexing, retrieval, and query handling, making it straightforward to integrate LLMs into applications.
Let's see how it works.
Imports and setup
Start by importing LlamaIndex. These are the tools to build Python applications with LLMs. You'll need to install llama-index, pypdf, and python-dotenv with pip.
from llama_index.llms.openai import OpenAI

from llama_index.core import (
    StorageContext,
    VectorStoreIndex,
    SimpleDirectoryReader,
    load_index_from_storage,
)

from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.query_engine import SubQuestionQueryEngine
from dotenv import load_dotenv

load_dotenv()
I store my OpenAI API key in a .env file and read it with the dotenv library. This is best practice: it keeps secrets out of your source code. You can download the PDF I used for the analysis here. Just rename the file to nvda.pdf.
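For reference, the .env file only needs a single line. load_dotenv() reads it into the environment at startup (the value below is a placeholder, not a real key):

```
OPENAI_API_KEY=<your-api-key>
```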
Configure the language model and load the document
First, we configure the language model with specific parameters and load the document.
from llama_index.core import Settings

llm = OpenAI(temperature=0, model="gpt-4o")
Settings.llm = llm

doc = SimpleDirectoryReader(input_files=["nvda.pdf"]).load_data()
print(f"Loaded NVDA 10-K with {len(doc)} pages")
We set the language model to GPT-4o with a temperature of 0 for deterministic responses. We then load the NVDA 10-K document from the PDF file and print the number of pages loaded.
Create an index to enable querying of the document
Next, we create an index from the loaded document to facilitate efficient querying.
index = VectorStoreIndex.from_documents(doc)
engine = index.as_query_engine(similarity_top_k=3)
We create a VectorStoreIndex from the loaded document, which embeds each page so we can run similarity searches over it. We then set up a query engine that returns the top 3 most relevant chunks for each query.
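Embedding a long 10-K costs time and API calls on every run. The StorageContext and load_index_from_storage imports from earlier let you persist the index to disk and reload it later. A minimal sketch, assuming the ./nvda_index directory name (my choice, not from the original):

```python
import os

from llama_index.core import (
    StorageContext,
    VectorStoreIndex,
    SimpleDirectoryReader,
    load_index_from_storage,
)

PERSIST_DIR = "./nvda_index"  # hypothetical directory name

if os.path.exists(PERSIST_DIR):
    # Reload the index built on a previous run
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)
else:
    # Build the index once and save it to disk
    doc = SimpleDirectoryReader(input_files=["nvda.pdf"]).load_data()
    index = VectorStoreIndex.from_documents(doc)
    index.storage_context.persist(persist_dir=PERSIST_DIR)

engine = index.as_query_engine(similarity_top_k=3)
```

This pattern follows LlamaIndex's standard persistence API; adapt the directory name to your project.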
Query specific financial information from the document
Now we can use the query engine to asynchronously ask questions about NVIDIA's financial report and extract specific information from the document.
response = await engine.aquery("What is the revenue of NVIDIA in the last period reported? Answer in millions with page reference. Include the period.")
print(response)
This correctly prints out the total revenue in millions of dollars.
response = await engine.aquery("What is the beginning and end date of NVIDIA's fiscal period?")
print(response)
Again, the engine returns the correct response.
The first query asks for the revenue in the last reported period, including the page reference. The second query asks for the beginning and end dates of NVIDIA's fiscal period. The responses are printed to the console.
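One note: top-level await only works in a Jupyter notebook or similar REPL. In a plain Python script, wrap the calls in an async function and run it with asyncio.run. Here is a sketch of the pattern, using a hypothetical stand-in coroutine in place of engine.aquery:

```python
import asyncio

# Hypothetical stand-in for engine.aquery so the pattern is self-contained;
# in the real script you would await engine.aquery(...) instead.
async def fake_aquery(question: str) -> str:
    await asyncio.sleep(0)  # simulate the network round trip
    return f"(answer to: {question})"

async def main() -> None:
    response = await fake_aquery("What is the revenue of NVIDIA in the last period reported?")
    print(response)

asyncio.run(main())
```

The same wrapper works for the sub-question engine queries later in the post.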
Set up a tool for sub-question querying
We will now set up a tool to handle more complex queries by breaking them down into sub-questions.
query_engine_tool = [
    QueryEngineTool(
        query_engine=engine,
        metadata=ToolMetadata(
            name='nvda_10k',
            description='Provides information about NVDA financials for year 2024'
        )
    )
]
s_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tool
)
We create a list of QueryEngineTool objects with metadata describing the tool's function. We then initialize a SubQuestionQueryEngine with the list of tools. This engine can break down complex queries into smaller, more manageable sub-questions.
Perform complex queries on customer segments and risks
Finally, we perform more complex queries on the document to extract detailed information about customer segments and business risks.
response = await s_engine.aquery("Compare and contrast the customer segments and geographies that grew the fastest")
print(response)

response = await s_engine.aquery("What risks to NVIDIA's business are highlighted in the document?")
print(response)

response = await s_engine.aquery("How does NVIDIA see the risks highlighted in the document impacting financial performance?")
print(response)
We use the sub-question query engine to ask complex questions about NVIDIA's customer segments and geographies and the business risks highlighted in the document. The engine breaks these questions into smaller sub-questions, processes them, and compiles the responses. Each response is then printed to the console.
Your next steps
Try changing the queries to extract different types of financial information from the document. Experiment with different parameters for the language model to see how they affect the responses. Customize the metadata for the query tools to better match your specific use case.