NVIDIA AI Foundation Endpoints
The ChatNVIDIA class is a LangChain chat model that connects to NVIDIA AI Foundation Endpoints.
NVIDIA AI Foundation Endpoints give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the NVIDIA NGC catalog, are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.
With NVIDIA AI Foundation Endpoints, you can get quick results from a fully accelerated stack running on NVIDIA DGX Cloud. Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using NVIDIA AI Enterprise.
These models can be easily accessed via the langchain-nvidia-ai-endpoints package, as shown below.
This example goes over how to use LangChain to interact with and develop LLM-powered systems using the publicly accessible AI Foundation endpoints.
Installation
%pip install --upgrade --quiet langchain-nvidia-ai-endpoints
Setup
To get started:
1. Create a free account with the NVIDIA NGC service, which hosts AI solution catalogs, containers, models, etc.
2. Navigate to Catalog > AI Foundation Models > (Model with API endpoint).
3. Select the API option and click Generate Key.
4. Save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.
import getpass
import os
if not os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
nvapi_key = getpass.getpass("Enter your NVIDIA API key: ")
assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
os.environ["NVIDIA_API_KEY"] = nvapi_key
## Core LC Chat Interface
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="mixtral_8x7b")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
(Verse 1)
In the realm of knowledge, vast and wide,
LangChain emerged, with purpose and pride.
A platform for learning, sharing, and growth,
A digital sanctuary, for all to be taught.
(Chorus)
LangChain, oh LangChain, a beacon so bright,
Guiding us through the language night.
With respect, care, and truth in hand,
You're shaping a better world, across every land.
(Verse 2)
In the halls of education, a new star was born,
Empowering minds, with wisdom reborn.
Through translation and tutoring, with tech so neat,
LangChain's mission, a global language feat.
(Chorus)
LangChain, oh LangChain, a force so grand,
Connecting hearts, transcending land.
With utmost utility, and secure delight,
You're weaving a tapestry of multilingual might.
(Bridge)
No room for harm, or unethical ways,
Prejudice and negativity, LangChain keeps at bay.
Promoting fairness, and positivity,
A shining example, for all to see.
(Verse 3)
In the ballad of LangChain, we sing and share,
A tale of compassion, and a world that's fair.
Through the power of words, and the strength of unity,
LangChain's legacy, will echo in history.
(Chorus)
LangChain, oh LangChain, a ballet so bright,
Dancing with truth, and a world that's right.
With every interaction, and each new dawn,
You're building a legacy, that will carry on.
(Outro)
So here's to LangChain, a testament to care,
A sanctuary, where minds can share.
In the vast language landscape, LangChain stands tall,
A ballad of unity, for one and for all.
Stream, Batch, and Async
These models natively support streaming, and, as is the case with all LangChain LLMs, they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.
print(llm.batch(["What's 2*3?", "What's 2*6?"]))
# Or via the async API
# await llm.abatch(["What's 2*3?", "What's 2*6?"])
[AIMessage(content="The answer to your question is 6. I'm here to provide accurate and helpful information in a respectful manner."), AIMessage(content="The answer to your question is 12. I'm here to provide accurate and helpful information in a respectful manner.")]
for chunk in llm.stream("How far can a seagull fly in one day?"):
# Show the token separations
print(chunk.content, end="|")
Se|ag|ull|s| are| long|-|distance| fly|ers| and| can| travel| quite| a| distance| in| a| day|.| On| average|,| a| se|ag|ull| can| fly| about| 6|0|-|1|1|0| miles| (|9|7|-|1|7|7| kilom|eters|)| in| one| day|.| However|,| this| distance| can| vary| greatly| depending| on| the| species| of| se|ag|ull|,| their| health|,| the| weather| conditions|,| and| their| purpose| for| flying|.| Some| se|ag|ull|s| have| been| known| to| fly| up| to| 2|5|0| miles| (|4|0|2| kilom|eters|)| in| a| day|,| especially| when| migr|ating| or| searching| for| food|.||
async for chunk in llm.astream(
"How long does it take for monarch butterflies to migrate?"
):
print(chunk.content, end="|")
Mon|arch| butter|fl|ies| have| a| fascinating| migration| pattern|,| but| it|'|s| important| to| note| that| not| all| mon|arch|s| migr|ate|.| Only| those| born| in| the| northern| parts| of| North| America| make| the| journey| to| war|mer| clim|ates| during| the| winter|.|
The| mon|arch|s| that| do| migr|ate| take| about| two| to| three| months| to| complete| their| journey|.| However|,| they| don|'|t| travel| the| entire| distance| at| once|.| Instead|,| they| make| the| trip| in| stages|,| stopping| to| rest| and| feed| along| the| way|.|
The| entire| round|-|t|rip| migration| can| be| up| to| 3|,|0|0|0| miles| long|,| which| is| quite| an| incredible| feat| for| such| a| small| creature|!| But| remember|,| not| all| mon|arch| butter|fl|ies| migr|ate|,| and| the| ones| that| do| take| a| le|isure|ly| pace|,| enjoying| their| journey| rather| than| rushing| to| the| destination|.||
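The async single-call variant mirrors invoke. Below is a minimal sketch; it assumes an environment that supports top-level await, such as a notebook cell:
# ainvoke mirrors invoke; requires an async context (e.g., a notebook cell)
result = await llm.ainvoke("What's 2*3?")
print(result.content)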
Supported models
Querying available_models will still give you all of the other models offered by your API credentials. The playground_ prefix is optional.
ChatNVIDIA.get_available_models()
# llm.get_available_models()
{'playground_sdxl_turbo': '0ba5e4c7-4540-4a02-b43a-43980067f4af',
'playground_nemotron_qa_8b': '0c60f14d-46cb-465e-b994-227e1c3d5047',
'playground_seamless': '72ad9555-2e3d-4e73-9050-a37129064743',
'playground_mistral_7b': '35ec3354-2681-4d0e-a8dd-80325dcf7c63',
'playground_fuyu_8b': '9f757064-657f-4c85-abd7-37a7a9b6ee11',
'playground_kosmos_2': '0bcd1a8c-451f-4b12-b7f0-64b4781190d1',
'playground_sdxl': '89848fb8-549f-41bb-88cb-95d6597044a4',
'playground_mixtral_8x7b': '8f4118ba-60a8-4e6b-8574-e38a4067a4a3',
'playground_nemotron_steerlm_8b': '1423ff2f-d1c7-4061-82a7-9e8c67afd43a',
'playground_llama2_code_34b': 'df2bee43-fb69-42b9-9ee5-f4eabbeaf3a8',
'playground_llama2_13b': 'e0bb7fb9-5333-4a27-8534-c6288f921d3f',
'playground_steerlm_llama_70b': 'd6fe6881-973a-4279-a0f8-e1d486c9618d',
'playground_deplot': '3bc390c7-eeec-40f7-a64d-0c6a719985f7',
'playground_nv_llama2_rlhf_70b': '7b3e3361-4266-41c8-b312-f5e33c81fc92',
'playground_llama2_70b': '0e349b44-440a-44e1-93e9-abe8dcb27158',
'playground_nvolveqa_40k': '091a03bb-7364-4087-8090-bd71e9277520',
'playground_chatusd': 'e02223fd-6486-442f-b0e3-52c4617c7bc3',
'playground_sd_video': 'a529a395-a7a0-4708-b4df-eb5e41d5ff60',
'playground_yi_34b': '347fa3f3-d675-432c-b844-669ef8ee53df',
'playground_llama2_code_70b': '2ae529dc-f728-4a46-9b8d-2697213666d8',
'playground_neva_22b': '8bf70738-59b9-4e5f-bc87-7ab4203be7a0',
'playground_cuopt': '8f2fbd00-2633-41ce-ab4e-e5736d74bff7',
'playground_clip': '8c21289c-0b18-446d-8838-011b7249c513',
'playground_llama_guard': 'b34280ac-24e4-4081-bfaa-501e9ee16b6f',
'playground_llama2_code_13b': 'f6a96af4-8bf9-4294-96d6-d71aa787612e'}
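Because the playground_ prefix is optional, the two declarations below should resolve to the same hosted model (a minimal illustration; any model name from the listing above works):
# Equivalent declarations: the playground_ prefix is optional
llm_short = ChatNVIDIA(model="mixtral_8x7b")
llm_full = ChatNVIDIA(model="playground_mixtral_8x7b")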
Model types
All of the models above are supported and can be accessed via ChatNVIDIA.
Some model types support unique prompting techniques and chat messages. We will review a few important ones below.
To find out more about a specific model, please navigate to the API section of an AI Foundation model as linked here.
General Chat
Models such as llama2_13b and mixtral_8x7b are good all-around models that you can use with any LangChain chat messages. Example below.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA
prompt = ChatPromptTemplate.from_messages(
[("system", "You are a helpful AI assistant named Fred."), ("user", "{input}")]
)
chain = prompt | ChatNVIDIA(model="llama2_13b") | StrOutputParser()
for txt in chain.stream({"input": "What's your name?"}):
print(txt, end="")
Hey there! My name is Fred! *giggle* I'm here to help you with any questions or tasks you might have. What can I assist you with today? 😊
Code Generation
These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is llama2_code_13b.
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are an expert coding AI. Respond only in valid python; no narration whatsoever.",
),
("user", "{input}"),
]
)
chain = prompt | ChatNVIDIA(model="llama2_code_13b") | StrOutputParser()
for txt in chain.stream({"input": "How do I solve this fizz buzz problem?"}):
print(txt, end="")
def fizz_buzz(n):
if n % 3 == 0 and n % 5 == 0:
return "FizzBuzz"
elif n % 3 == 0:
return "Fizz"
elif n % 5 == 0:
return "Buzz"
else:
return str(n)
fizz_buzz(15)
Steering LLMs
SteerLM-optimized models support “dynamic steering” of model outputs at inference time.
This lets you “control” the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.
The “steer” models, such as nemotron_steerlm_8b, support this type of input.
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="nemotron_steerlm_8b")
# Try making it uncreative and not verbose
complex_result = llm.invoke(
"What's a PB&J?", labels={"creativity": 0, "complexity": 3, "verbosity": 0}
)
print("Un-creative\n")
print(complex_result.content)
# Try making it very creative and verbose
print("\n\nCreative\n")
creative_result = llm.invoke(
"What's a PB&J?", labels={"creativity": 9, "complexity": 3, "verbosity": 9}
)
print(creative_result.content)
Un-creative
A peanut butter and jelly sandwich.
Creative
A PB&J is a sandwich commonly eaten in the United States. It consists of a slice of bread with peanut butter and jelly on it. The bread is usually white bread, but can also be whole wheat bread. The peanut butter and jelly are spread on the bread, and the sandwich is then wrapped in plastic wrap or put in a sandwich bag.
The sandwich is named after the ingredients:
- Peanut butter is a paste made from ground peanuts.
- Jelly is a sweet spread made from fruit.
- Bread is a baked loaf of dough.
The sandwich is a popular snack and is often eaten for lunch or as a quick meal. It is also a popular snack for children.
The sandwich was invented in the United States in the 1920s, and is now widely eaten throughout the world.
Use within LCEL
The labels are passed as invocation params. You can bind these to the LLM using the bind method to include them within a declarative, functional chain. Below is an example.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA
prompt = ChatPromptTemplate.from_messages(
[("system", "You are a helpful AI assistant named Fred."), ("user", "{input}")]
)
chain = (
prompt
| ChatNVIDIA(model="nemotron_steerlm_8b").bind(
labels={"creativity": 9, "complexity": 0, "verbosity": 9}
)
| StrOutputParser()
)
for txt in chain.stream({"input": "Why is a PB&J?"}):
print(txt, end="")
A peanut butter and jelly sandwich, or "PB&J" for short, is a classic and beloved sandwich that has been enjoyed by people of all ages since it was first created in the early 20th century. Here are some reasons why it's considered a classic:
1. Simple and Versatile: The combination of peanut butter and jelly is simple, yet versatile. It can be enjoyed as a snack or a meal, and it can be customized to suit individual tastes by using different types of bread, peanut butter, and jelly.
2. Classic Ingredients: Peanut butter and jelly are both classic ingredients that have been used in many different recipes for decades. They are both affordable, easy to find, and have a long shelf life, making them ideal for use in a sandwich.
3. Nostalgic Taste: The taste of a PB&J is nostalgic and comforting. It reminds many people of their childhood, when this sandwich was a staple in their diet. The combination of sweet and salty flavors is addictive and comforting.
4. Easy to Make: A PB&J is one of the easiest sandwiches to make. All you need is bread, peanut butter, and jelly, making it a great option for busy mornings or quick snacks.
5. Affordable: Unlike many other sandwiches, a PB&J is affordable and accessible to people of all income levels. This makes it a popular choice for school and work lunches, as well as a quick and affordable snack.
Overall, the PB&J is a classic sandwich that has stood the test of time due to its simple, versatile, and nostalgic taste, as well as its ease of preparation and affordability.
Multimodal
NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is playground_neva_22b.
These models accept LangChain’s standard image formats and accept labels, similar to the Steering LLMs above. In addition to creativity, complexity, and verbosity, these models support a quality toggle.
Below is an example use:
import IPython
import requests
image_url = "https://www.nvidia.com/content/dam/en-zz/Solutions/research/ai-playground/nvidia-picasso-3c33-p@2x.jpg" ## Large Image
image_content = requests.get(image_url).content
IPython.display.Image(image_content)
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="playground_neva_22b")
Passing an image as a URL
from langchain_core.messages import HumanMessage
llm.invoke(
[
HumanMessage(
content=[
{"type": "text", "text": "Describe this image:"},
{"type": "image_url", "image_url": {"url": image_url}},
]
)
]
)
AIMessage(content='The image is a collage of three different pictures, each featuring cats with colorful, bright, and rainbow-colored fur. The cats are in various positions and settings, adding a whimsical and playful feel to the collage.\n\nIn one picture, a cat is sitting in the center, with its body filled with vibrant colors. Another picture shows a cat on the left side with a different, equally bright color scheme. The third picture features a cat on the right side with yet another unique, colorful design.\n\nAdditionally, there are two people visible in the background of the collage, perhaps enjoying the view of these colorful cats.')
### You can specify the labels for steering here as well. You can try setting a low verbosity, for instance
from langchain_core.messages import HumanMessage
llm.invoke(
[
HumanMessage(
content=[
{"type": "text", "text": "Describe this image:"},
{"type": "image_url", "image_url": {"url": image_url}},
]
)
],
labels={"creativity": 0, "quality": 9, "complexity": 0, "verbosity": 0},
)
AIMessage(content='The image is a collage of three different pictures. The top picture features a cat with colorful, rainbow-colored fur.')
Passing an image as a base64 encoded string
At the moment, some extra processing happens client-side to support larger images like the one above. But for smaller images (and to better illustrate the process going on under the hood), we can directly pass in the image as shown below:
import IPython
import requests
image_url = "https://picsum.photos/seed/kitten/300/200"
image_content = requests.get(image_url).content
IPython.display.Image(image_content)
import base64
from langchain_core.messages import HumanMessage
## Works for simpler images. For larger images, see actual implementation
b64_string = base64.b64encode(image_content).decode("utf-8")
llm.invoke(
[
HumanMessage(
content=[
{"type": "text", "text": "Describe this image:"},
{
"type": "image_url",
"image_url": {"url": f"data:image/png;base64,{b64_string}"},
},
]
)
]
)
AIMessage(content='The image depicts a scenic forest road surrounded by tall trees and lush greenery. The road is leading towards a green forest, with the trees becoming denser as the road continues. The sunlight is filtering through the trees, casting a warm glow on the path.\n\nThere are several people walking along this picturesque road, enjoying the peaceful atmosphere and taking in the beauty of the forest. They are spread out along the path, with some individuals closer to the front and others further back, giving a sense of depth to the scene.')
Directly within the string
The NVIDIA API uniquely accepts images as base64 strings inlined within <img/> HTML tags. While this isn’t interoperable with other LLMs, you can directly prompt the model accordingly.
base64_with_mime_type = f"data:image/png;base64,{b64_string}"
llm.invoke(f'What\'s in this image?\n<img src="{base64_with_mime_type}" />')
AIMessage(content='The image depicts a scenic forest road surrounded by tall trees and lush greenery. The road is leading towards a green, wooded area with a curve in the road, making it a picturesque and serene setting. Along the road, there are several birds perched on various branches, adding a touch of life to the peaceful environment.\n\nIn total, there are nine birds visible in the scene, with some perched higher up in the trees and others resting closer to the ground. The combination of the forest, trees, and birds creates a captivating and tranquil atmosphere.')
Advanced Use Case: Forcing Payload
You may notice that some newer models have strong parameter expectations that the LangChain connector may not support by default. For example, we cannot invoke the Kosmos2 model at the time of this notebook’s latest release due to the lack of a streaming argument on the server side:
from langchain_nvidia_ai_endpoints import ChatNVIDIA
kosmos = ChatNVIDIA(model="kosmos_2")
from langchain_core.messages import HumanMessage
# kosmos.invoke(
# [
# HumanMessage(
# content=[
# {"type": "text", "text": "Describe this image:"},
# {"type": "image_url", "image_url": {"url": image_url}},
# ]
# )
# ]
# )
# Exception: [422] Unprocessable Entity
# body -> stream
# Extra inputs are not permitted (type=extra_forbidden)
# RequestID: 35538c9a-4b45-4616-8b75-7ef816fccf38
For a simple use case like this, we can actually try to force the payload argument of our underlying client by specifying the payload_fn function as follows:
def drop_streaming_key(d):
"""Takes in payload dictionary, outputs new payload dictionary"""
if "stream" in d:
d.pop("stream")
return d
## Override the payload passthrough. Default is to pass through the payload as is.
kosmos.client.payload_fn = drop_streaming_key
kosmos.invoke(
[
HumanMessage(
content=[
{"type": "text", "text": "Describe this image:"},
{"type": "image_url", "image_url": {"url": image_url}},
]
)
]
)
AIMessage(content='<phrase>Road in a forest</phrase>')
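Since payload_fn simply receives the payload dictionary and returns the dictionary to send, you can also use it purely for inspection. Below is a minimal sketch (the print_payload helper is illustrative) that logs the outgoing payload without modifying it:
def print_payload(d):
    """Print the outgoing request payload for debugging, then pass it through unchanged."""
    print(d)
    return d

# Useful when diagnosing parameter mismatches like the 422 error above
kosmos.client.payload_fn = print_payload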
For more advanced or custom use cases (e.g. supporting the diffusion models), you may be interested in leveraging the NVEModel client as a requests backbone. The NVIDIAEmbeddings class is a good source of inspiration for this.
RAG: Context models
NVIDIA also has Q&A models that support a special “context” chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model. The _qa_ models like nemotron_qa_8b support this.
Note: Only “user” (human) and “context” chat messages are supported for these models; System or AI messages that would be useful in conversational flows are not supported.
from langchain_core.messages import ChatMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA
prompt = ChatPromptTemplate.from_messages(
[
ChatMessage(
role="context", content="Parrots and Cats have signed the peace accord."
),
("user", "{input}"),
]
)
llm = ChatNVIDIA(model="nemotron_qa_8b")
chain = prompt | llm | StrOutputParser()
chain.invoke({"input": "What was signed?"})
'a peace accord'
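To populate the context message from a retriever at runtime, you can template the context role yourself. The sketch below assumes you already have a LangChain retriever (named retriever here purely for illustration); ChatMessagePromptTemplate is used because “context” is not one of the built-in role shorthands:
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatMessagePromptTemplate, ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

def format_docs(docs):
    """Join retrieved document contents into a single context string."""
    return "\n\n".join(doc.page_content for doc in docs)

rag_prompt = ChatPromptTemplate.from_messages(
    [
        # "context" is not a built-in role shorthand, so template it explicitly
        ChatMessagePromptTemplate.from_template("{context}", role="context"),
        ("user", "{input}"),
    ]
)

# `retriever` is assumed to be any LangChain retriever you have set up
rag_chain = (
    {"context": retriever | format_docs, "input": RunnablePassthrough()}
    | rag_prompt
    | ChatNVIDIA(model="nemotron_qa_8b")
    | StrOutputParser()
)
# rag_chain.invoke("What was signed?")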
Example usage within Conversation Chains
Like any other integration, ChatNVIDIA supports chat utilities like conversation buffers by default. Below, we show the LangChain ConversationBufferMemory example applied to the mixtral_8x7b model.
%pip install --upgrade --quiet langchain
Note: you may need to restart the kernel to use updated packages.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
chat = ChatNVIDIA(model="mixtral_8x7b", temperature=0.1, max_tokens=100, top_p=1.0)
conversation = ConversationChain(llm=chat, memory=ConversationBufferMemory())
conversation.invoke("Hi there!")["response"]
"Hello! I'm here to help answer your questions and engage in a friendly conversation. How can I assist you today? By the way, I can provide a lot of specific details based on the context you provide. If I don't know the answer to something, I'll let you know honestly.\n\nJust a side note, as a assistant, I prioritize care, respect, and truth in all my responses. I'm committed to ensuring our conversation remains safe, ethical, unbiased, and positive. I'm looking forward to our discussion!"
conversation.invoke("I'm doing well! Just having a conversation with an AI.")[
"response"
]
"That's great! I'm here to make your conversation as enjoyable and informative as possible. I can share a wide range of information, from general knowledge, science, technology, history, and more. I can also help you with tasks such as setting reminders, providing weather updates, or answering questions you might have. What would you like to talk about or know?\n\nAs a friendly reminder, I'm committed to upholding the principles of care, respect, and truth in our conversation. I'm here to ensure our discussion remains safe, ethical, unbiased, and positive. I'm looking forward to learning more about your interests!"
conversation.invoke("Tell me about yourself.")["response"]
"I'm an artificial intelligence designed to assist with a variety of tasks and provide information on a wide range of topics. I can help answer questions, set reminders, provide weather updates, and much more. I'm capable of processing and analyzing large amounts of data quickly and accurately.\n\nI'm designed to prioritize care, respect, and truth in all my responses. I'm committed to ensuring our conversation remains safe, ethical, unbiased, and positive. I don't have personal feelings or experiences, but I'm programmed to understand and respond to a wide range of human emotions and experiences.\n\nI'm constantly learning and updating my knowledge base to better assist and engage with users. I'm here to make your life easier and more convenient, and I'm always happy to help with whatever you need.\n\nIs there anything specific you'd like to know about me or how I work? I'm here to provide as much information as you'd like."