ChatGroq

This will help you get started with ChatGroq chat models. For detailed documentation of all ChatGroq features and configurations head to the API reference.

Overview

Integration details

Class: ChatGroq
Package: @langchain/groq
Local: ❌
Serializable: ❌
PY support: ✅

Model features

Tool calling: ✅
Structured output: ✅
JSON mode: ✅
Image input: ❌
Audio input: ❌
Video input: ❌
Token-level streaming: ✅
Token usage: ✅
Logprobs: ✅

Setup

To access ChatGroq models you'll need to create a Groq account, get an API key, and install the @langchain/groq integration package.

Credentials

In order to use the Groq API you'll need an API key. You can sign up for a Groq account and create an API key in the Groq console at https://console.groq.com. Then, set the API key as an environment variable in your terminal:

export GROQ_API_KEY="your-api-key"

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"

Installation

The LangChain ChatGroq integration lives in the @langchain/groq package:

yarn add @langchain/groq
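
If you use npm or pnpm instead of yarn, the equivalent commands are:

npm install @langchain/groq

pnpm add @langchain/groq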

Instantiation

Now we can instantiate our model object and generate chat completions:

import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  // other params...
});

Invocation

const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
aiMsg;
AIMessage {
  "content": "I enjoy programming. (The French translation is: \"J'aime programmer.\")\n\nNote: I chose to translate \"I love programming\" as \"J'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 73,
      "promptTokens": 31,
      "totalTokens": 104
    },
    "finish_reason": "stop"
  },
  "tool_calls": [],
  "invalid_tool_calls": []
}
console.log(aiMsg.content);
I enjoy programming. (The French translation is: "J'aime programmer.")

Note: I chose to translate "I love programming" as "J'aime programmer" instead of "Je suis amoureux de programmer" because the latter has a romantic connotation that is not present in the original English sentence.

Chaining

We can chain our model with a prompt template like so:

import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
AIMessage {
  "content": "That's great! I can help you translate English phrases related to programming into German.\n\n\"I love programming\" can be translated to German as \"Ich liebe Programmieren\".\n\nHere are some more programming-related phrases translated into German:\n\n* \"Programming language\" = \"Programmiersprache\"\n* \"Code\" = \"Code\"\n* \"Variable\" = \"Variable\"\n* \"Function\" = \"Funktion\"\n* \"Array\" = \"Array\"\n* \"Object-oriented programming\" = \"Objektorientierte Programmierung\"\n* \"Algorithm\" = \"Algorithmus\"\n* \"Data structure\" = \"Datenstruktur\"\n* \"Debugging\" = \"Debuggen\"\n* \"Compile\" = \"Kompilieren\"\n* \"Link\" = \"Verknüpfen\"\n* \"Run\" = \"Ausführen\"\n* \"Test\" = \"Testen\"\n* \"Deploy\" = \"Bereitstellen\"\n* \"Version control\" = \"Versionskontrolle\"\n* \"Open source\" = \"Open Source\"\n* \"Software development\" = \"Softwareentwicklung\"\n* \"Agile methodology\" = \"Agile Methodik\"\n* \"DevOps\" = \"DevOps\"\n* \"Cloud computing\" = \"Cloud Computing\"\n\nI hope this helps! Let me know if you have any other questions or if you need further translations.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 327,
      "promptTokens": 25,
      "totalTokens": 352
    },
    "finish_reason": "stop"
  },
  "tool_calls": [],
  "invalid_tool_calls": []
}

Tool calling

Groq chat models support calling multiple functions to gather all the data required to answer a question. Here's an example:

import { tool } from "@langchain/core/tools";
import { ChatGroq } from "@langchain/groq";
import { z } from "zod";

// Mocked out function, could be a database/API call in production
const getCurrentWeatherTool = tool(
  (input) => {
    if (input.location.toLowerCase().includes("tokyo")) {
      return JSON.stringify({
        location: input.location,
        temperature: "10",
        unit: "celsius",
      });
    } else if (input.location.toLowerCase().includes("san francisco")) {
      return JSON.stringify({
        location: input.location,
        temperature: "72",
        unit: "fahrenheit",
      });
    } else {
      return JSON.stringify({
        location: input.location,
        temperature: "22",
        unit: "celsius",
      });
    }
  },
  {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    schema: z.object({
      location: z
        .string()
        .describe("The city and state, e.g. San Francisco, CA"),
      unit: z.enum(["celsius", "fahrenheit"]).optional(),
    }),
  }
);

// Bind function to the model as a tool
const llmWithTools = new ChatGroq({
  model: "mixtral-8x7b-32768",
  maxTokens: 128,
}).bindTools([getCurrentWeatherTool], {
  tool_choice: "auto",
});

const resWithTools = await llmWithTools.invoke([
  ["human", "What's the weather like in San Francisco?"],
]);

console.dir(resWithTools.tool_calls, { depth: null });
[
  {
    name: 'get_current_weather',
    args: { location: 'San Francisco', unit: 'fahrenheit' },
    type: 'tool_call',
    id: 'call_1mpy'
  }
]
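
The tool call above only tells you what the model wants to run. To complete the loop, execute the tool yourself and pass its output back to the model as a ToolMessage so it can compose a final answer. Here is a minimal sketch of that second round trip (the follow-up invocation and its wiring are our addition, not part of the original example):

import { ToolMessage } from "@langchain/core/messages";

const toolCall = resWithTools.tool_calls?.[0];
if (toolCall) {
  // Run the mocked weather tool with the arguments the model produced
  const toolOutput = await getCurrentWeatherTool.invoke(toolCall.args);
  // Replay the conversation: original question, the model's tool call,
  // and the tool's result, so the model can answer in natural language
  const finalRes = await llmWithTools.invoke([
    ["human", "What's the weather like in San Francisco?"],
    resWithTools,
    new ToolMessage({ content: toolOutput, tool_call_id: toolCall.id! }),
  ]);
  console.log(finalRes.content);
}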

.withStructuredOutput({ ... })

info

The .withStructuredOutput method is in beta. It is actively being worked on, so the API may change.

You can also use the .withStructuredOutput({ ... }) method to coerce ChatGroq into returning a structured output.

The method accepts either a Zod schema or a valid JSON schema (like what is returned from zodToJsonSchema).

Using the method is simple: just define your LLM and call .withStructuredOutput({ ... }) on it, passing the desired schema.

Here is an example using a Zod schema and the functionCalling method (the default):

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatGroq } from "@langchain/groq";
import { z } from "zod";

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

const llmForWSO = new ChatGroq({
  temperature: 0,
  model: "mixtral-8x7b-32768",
});
const modelWithStructuredOutput =
  llmForWSO.withStructuredOutput(calculatorSchema);

const promptWSO = ChatPromptTemplate.fromMessages([
  ["system", "You are VERY bad at math and must always use a calculator."],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const chainWSO = promptWSO.pipe(modelWithStructuredOutput);
const resultWSO = await chainWSO.invoke({});
console.log(resultWSO);
{ operation: 'add', number1: 2, number2: 2 }
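
In addition to the default functionCalling method, you can pass method: "jsonMode" to use the model's JSON mode under the hood instead of tool calling. With jsonMode the prompt itself must instruct the model to return JSON matching your schema; the sketch below makes that assumption, and the system prompt wording is ours:

const jsonModeLlm = llmForWSO.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  method: "jsonMode",
});

// The prompt must spell out the expected JSON keys when using jsonMode
const jsonModePrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are VERY bad at math and must always use a calculator. Respond with a JSON object containing three keys: 'operation', 'number1', and 'number2'.",
  ],
  ["human", "Please help me!! What is 2 + 2?"],
]);
const jsonModeResult = await jsonModePrompt.pipe(jsonModeLlm).invoke({});
console.log(jsonModeResult);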

You can also specify includeRaw to return both the parsed output and the raw model message in the result.

const includeRawModel = llmForWSO.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true,
});

const includeRawChain = promptWSO.pipe(includeRawModel);
const includeRawResult = await includeRawChain.invoke({});
console.dir(includeRawResult, { depth: null });
{
  raw: AIMessage {
    lc_serializable: true,
    lc_kwargs: {
      content: '',
      additional_kwargs: {
        tool_calls: [
          {
            id: 'call_7z1y',
            type: 'function',
            function: {
              name: 'calculator',
              arguments: '{"number1":2,"number2":2,"operation":"add"}'
            }
          }
        ]
      },
      tool_calls: [
        {
          name: 'calculator',
          args: { number1: 2, number2: 2, operation: 'add' },
          type: 'tool_call',
          id: 'call_7z1y'
        }
      ],
      invalid_tool_calls: [],
      response_metadata: {}
    },
    lc_namespace: [ 'langchain_core', 'messages' ],
    content: '',
    name: undefined,
    additional_kwargs: {
      tool_calls: [
        {
          id: 'call_7z1y',
          type: 'function',
          function: {
            name: 'calculator',
            arguments: '{"number1":2,"number2":2,"operation":"add"}'
          }
        }
      ]
    },
    response_metadata: {
      tokenUsage: { completionTokens: 111, promptTokens: 1257, totalTokens: 1368 },
      finish_reason: 'tool_calls'
    },
    id: undefined,
    tool_calls: [
      {
        name: 'calculator',
        args: { number1: 2, number2: 2, operation: 'add' },
        type: 'tool_call',
        id: 'call_7z1y'
      }
    ],
    invalid_tool_calls: [],
    usage_metadata: undefined
  },
  parsed: { operation: 'add', number1: 2, number2: 2 }
}

Streaming

Groq's API also supports streaming token responses. The example below demonstrates how to use this feature.

import { ChatGroq } from "@langchain/groq";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llmForStreaming = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY,
});
const promptForStreaming = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
]);
const outputParserForStreaming = new StringOutputParser();
const chainForStreaming = promptForStreaming
  .pipe(llmForStreaming)
  .pipe(outputParserForStreaming);
const streamRes = await chainForStreaming.stream({
  input: "Hello",
});
let streamedRes = "";
for await (const item of streamRes) {
  streamedRes += item;
  console.log("stream:", streamedRes);
}
stream:
stream: Hello
stream: Hello!
stream: Hello! I
stream: Hello! I'
stream: Hello! I'm
stream: Hello! I'm here
stream: Hello! I'm here to
stream: Hello! I'm here to help
stream: Hello! I'm here to help you
stream: Hello! I'm here to help you.
stream: Hello! I'm here to help you. Is
stream: Hello! I'm here to help you. Is there
stream: Hello! I'm here to help you. Is there something
stream: Hello! I'm here to help you. Is there something you
stream: Hello! I'm here to help you. Is there something you would
stream: Hello! I'm here to help you. Is there something you would like
stream: Hello! I'm here to help you. Is there something you would like to
stream: Hello! I'm here to help you. Is there something you would like to know
stream: Hello! I'm here to help you. Is there something you would like to know or
stream: Hello! I'm here to help you. Is there something you would like to know or a
stream: Hello! I'm here to help you. Is there something you would like to know or a task
stream: Hello! I'm here to help you. Is there something you would like to know or a task you
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with?
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me anything
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me anything.
stream: Hello! I'm here to help you. Is there something you would like to know or a task you need assistance with? Please feel free to ask me anything.
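
Because the chain ends in a StringOutputParser, each item the stream yields is a plain string delta. The loop above accumulates chunks into streamedRes before logging each step; if you just want to print tokens as they arrive, a Node.js-specific sketch is to write each chunk directly (process.stdout.write avoids the newline console.log adds):

for await (const chunk of await chainForStreaming.stream({ input: "Hello" })) {
  process.stdout.write(chunk);
}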

API reference

For detailed documentation of all ChatGroq features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_groq.ChatGroq.html

