The Amazon SDK for JavaScript V3 API Reference Guide describes in detail all the API operations for the Amazon SDK for JavaScript version 3 (V3).
Amazon Bedrock Runtime examples using SDK for JavaScript (v3)
The following code examples show you how to perform actions and implement common scenarios by using the Amazon SDK for JavaScript (v3) with Amazon Bedrock Runtime.

Actions are code excerpts from larger programs and must be run in context. While actions show you how to call individual service functions, you can see actions in context in their related scenarios and cross-service examples.

Scenarios are code examples that show you how to accomplish a specific task by calling multiple functions within the same service.

Each example includes a link to GitHub, where you can find instructions on how to set up and run the code in context.
Topics

Actions
The following code example shows how to generate text on Amazon Bedrock by invoking the AI21 Labs Jurassic-2 model.
- SDK for JavaScript (v3)

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon Code Examples Repository.

Invoke the AI21 Labs Jurassic-2 foundation model to generate text.
```javascript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  AccessDeniedException,
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} Data
 * @property {string} text
 *
 * @typedef {Object} Completion
 * @property {Data} data
 *
 * @typedef {Object} ResponseBody
 * @property {Completion[]} completions
 */

/**
 * Invokes the AI21 Labs Jurassic-2 large-language model to run an inference
 * using the input provided in the request body.
 *
 * @param {string} prompt - The prompt that you want Jurassic-2 to complete.
 * @returns {string} The inference response (completion) from the model.
 */
export const invokeJurassic2 = async (prompt) => {
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  const modelId = "ai21.j2-mid-v1";

  /* The different model providers have individual request and response formats.
   * For the format, ranges, and default values for AI21 Labs Jurassic-2, refer to:
   * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-jurassic2.html
   */
  const payload = {
    prompt,
    maxTokens: 500,
    temperature: 0.5,
  };

  const command = new InvokeModelCommand({
    body: JSON.stringify(payload),
    contentType: "application/json",
    accept: "application/json",
    modelId,
  });

  try {
    const response = await client.send(command);
    const decodedResponseBody = new TextDecoder().decode(response.body);

    /** @type {ResponseBody} */
    const responseBody = JSON.parse(decodedResponseBody);

    return responseBody.completions[0].data.text;
  } catch (err) {
    if (err instanceof AccessDeniedException) {
      console.error(
        `Access denied. Ensure you have the correct permissions to invoke ${modelId}.`,
      );
    } else {
      throw err;
    }
  }
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Complete the following: "Once upon a time..."';

  console.log("\nModel: AI21 Labs Jurassic-2");
  console.log(`Prompt: ${prompt}`);

  const completion = await invokeJurassic2(prompt);
  console.log("Completion:");
  console.log(completion);
  console.log("\n");
}
```
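Every InvokeModel example in this section decodes the response body the same way: the SDK returns the body as a byte array, which must be UTF-8 decoded and then parsed as JSON. Here is a minimal sketch of that step in isolation, using a hand-built byte array in place of a real Bedrock response (`decodeResponseBody` is an illustrative name, not an SDK function):

```javascript
// Decode an InvokeModel-style response body (a Uint8Array of UTF-8 JSON)
// into a plain JavaScript object, mirroring the TextDecoder/JSON.parse
// step used throughout the examples in this section.
const decodeResponseBody = (bodyBytes) => {
  const decoded = new TextDecoder().decode(bodyBytes);
  return JSON.parse(decoded);
};

// Stand-in for a real response body, built from a JSON string.
const fakeBody = new TextEncoder().encode(
  JSON.stringify({ completions: [{ data: { text: "Hello!" } }] }),
);

console.log(decodeResponseBody(fakeBody).completions[0].data.text); // "Hello!"
```

`TextDecoder` and `TextEncoder` are globals in modern Node.js, so the sketch runs without any imports.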
For API details, see InvokeModel in the Amazon SDK for JavaScript API Reference.
The following code example shows how to generate text on Amazon Bedrock by invoking the Amazon Titan Text G1 model.
- SDK for JavaScript (v3)

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon Code Examples Repository.

Invoke the Amazon Titan Text G1 foundation model to generate text.
```javascript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  AccessDeniedException,
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseBody
 * @property {Object[]} results
 */

/**
 * Invokes the Titan Text G1 - Express model to run an inference
 * using the input provided in the request body.
 *
 * @param {string} prompt - The prompt that you want Titan Text Express to complete.
 * @returns {object[]} The inference response (results) from the model.
 */
export const invokeTitanTextExpressV1 = async (prompt) => {
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  const modelId = "amazon.titan-text-express-v1";

  /* The different model providers have individual request and response formats.
   * For the format, ranges, and default values for Titan text, refer to:
   * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-titan-text.html
   */
  const textGenerationConfig = {
    maxTokenCount: 4096,
    stopSequences: [],
    temperature: 0,
    topP: 1,
  };

  const payload = {
    inputText: prompt,
    textGenerationConfig,
  };

  const command = new InvokeModelCommand({
    body: JSON.stringify(payload),
    contentType: "application/json",
    accept: "application/json",
    modelId,
  });

  try {
    const response = await client.send(command);
    const decodedResponseBody = new TextDecoder().decode(response.body);

    /** @type {ResponseBody} */
    const responseBody = JSON.parse(decodedResponseBody);
    return responseBody.results;
  } catch (err) {
    if (err instanceof AccessDeniedException) {
      console.error(
        `Access denied. Ensure you have the correct permissions to invoke ${modelId}.`,
      );
    } else {
      throw err;
    }
  }
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = `Meeting transcript:
Miguel: Hi Brant, I want to discuss the workstream for our new product launch
Brant: Sure Miguel, is there anything in particular you want to discuss?
Miguel: Yes, I want to talk about how users enter into the product.
Brant: Ok, in that case let me add in Namita.
Namita: Hey everyone
Brant: Hi Namita, Miguel wants to discuss how users enter into the product.
Miguel: its too complicated and we should remove friction. for example, why do I need to fill out additional forms? I also find it difficult to find where to access the product when I first land on the landing page.
Brant: I would also add that I think there are too many steps.
Namita: Ok, I can work on the landing page to make the product more discoverable but brant can you work on the additonal forms?
Brant: Yes but I would need to work with James from another team as he needs to unblock the sign up workflow. Miguel can you document any other concerns so that I can discuss with James only once?
Miguel: Sure.

From the meeting transcript above, Create a list of action items for each person.`;

  console.log("\nModel: Titan Text Express v1");
  console.log(`Prompt: ${prompt}`);

  const results = await invokeTitanTextExpressV1(prompt);
  console.log("Completion:");
  for (const result of results) {
    console.log(result.outputText);
  }
  console.log("\n");
}
```
For API details, see InvokeModel in the Amazon SDK for JavaScript API Reference.
The following code example shows how to generate text on Amazon Bedrock by invoking the Anthropic Claude 2 model.
- SDK for JavaScript (v3)

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon Code Examples Repository.

Invoke the Anthropic Claude 2 foundation model to generate text.
```javascript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  AccessDeniedException,
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseBody
 * @property {string} completion
 */

/**
 * Invokes the Anthropic Claude 2 model to run an inference using the input
 * provided in the request body.
 *
 * @param {string} prompt - The prompt that you want Claude to complete.
 * @returns {string} The inference response (completion) from the model.
 */
export const invokeClaude = async (prompt) => {
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  const modelId = "anthropic.claude-v2";

  /* Claude requires you to enclose the prompt as follows: */
  const enclosedPrompt = `Human: ${prompt}\n\nAssistant:`;

  /* The different model providers have individual request and response formats.
   * For the format, ranges, and default values for Anthropic Claude, refer to:
   * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
   */
  const payload = {
    prompt: enclosedPrompt,
    max_tokens_to_sample: 500,
    temperature: 0.5,
    stop_sequences: ["\n\nHuman:"],
  };

  const command = new InvokeModelCommand({
    body: JSON.stringify(payload),
    contentType: "application/json",
    accept: "application/json",
    modelId,
  });

  try {
    const response = await client.send(command);
    const decodedResponseBody = new TextDecoder().decode(response.body);

    /** @type {ResponseBody} */
    const responseBody = JSON.parse(decodedResponseBody);

    return responseBody.completion;
  } catch (err) {
    if (err instanceof AccessDeniedException) {
      console.error(
        `Access denied. Ensure you have the correct permissions to invoke ${modelId}.`,
      );
    } else {
      throw err;
    }
  }
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Complete the following: "Once upon a time..."';

  console.log("\nModel: Anthropic Claude v2");
  console.log(`Prompt: ${prompt}`);

  const completion = await invokeClaude(prompt);
  console.log("Completion:");
  console.log(completion);
  console.log("\n");
}
```
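If you invoke Claude 2 from more than one place, the required Human/Assistant enclosure shown above can be factored into a small helper. A minimal sketch follows; the name `enclosePromptForClaude` is ours for illustration, not part of the SDK:

```javascript
// Wrap a raw prompt in the Human/Assistant turns that the Claude 2
// text-completion API expects. Without this enclosure, the model
// rejects the request.
const enclosePromptForClaude = (prompt) => `Human: ${prompt}\n\nAssistant:`;

console.log(enclosePromptForClaude("Say hello."));
// Human: Say hello.
//
// Assistant:
```

The matching `stop_sequences: ["\n\nHuman:"]` parameter in the payload keeps the model from generating a follow-up "Human" turn of its own.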
For API details, see InvokeModel in the Amazon SDK for JavaScript API Reference.
The following code example shows how to generate text on Amazon Bedrock by invoking the Meta Llama 2 Chat model.
- SDK for JavaScript (v3)

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon Code Examples Repository.

Invoke the Meta Llama 2 Chat foundation model to generate text.
```javascript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  AccessDeniedException,
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} ResponseBody
 * @property {string} generation
 */

/**
 * Invokes the Meta Llama 2 Chat model to run an inference
 * using the input provided in the request body.
 *
 * @param {string} prompt - The prompt that you want Llama-2 to complete.
 * @returns {string} The inference response (generation) from the model.
 */
export const invokeLlama2 = async (prompt) => {
  const client = new BedrockRuntimeClient({ region: "us-east-1" });

  const modelId = "meta.llama2-13b-chat-v1";

  /* The different model providers have individual request and response formats.
   * For the format, ranges, and default values for Meta Llama 2 Chat, refer to:
   * https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
   */
  const payload = {
    prompt,
    temperature: 0.5,
    top_p: 0.9,
    max_gen_len: 512,
  };

  const command = new InvokeModelCommand({
    body: JSON.stringify(payload),
    contentType: "application/json",
    accept: "application/json",
    modelId,
  });

  try {
    const response = await client.send(command);
    const decodedResponseBody = new TextDecoder().decode(response.body);

    /** @type {ResponseBody} */
    const responseBody = JSON.parse(decodedResponseBody);

    return responseBody.generation;
  } catch (err) {
    if (err instanceof AccessDeniedException) {
      console.error(
        `Access denied. Ensure you have the correct permissions to invoke ${modelId}.`,
      );
    } else {
      throw err;
    }
  }
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Complete the following: "Once upon a time..."';

  console.log("\nModel: Meta Llama 2 Chat");
  console.log(`Prompt: ${prompt}`);

  const completion = await invokeLlama2(prompt);
  console.log("Completion:");
  console.log(completion);
  console.log("\n");
}
```
For API details, see InvokeModel in the Amazon SDK for JavaScript API Reference.
The following code example shows how to generate text on Amazon Bedrock by invoking the Mistral 7B model.
- SDK for JavaScript (v3)

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon Code Examples Repository.

Invoke the Mistral 7B foundation model to generate text.
```javascript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  AccessDeniedException,
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} Output
 * @property {string} text
 *
 * @typedef {Object} ResponseBody
 * @property {Output[]} outputs
 */

/**
 * Invokes the Mistral 7B model to run an inference using the input
 * provided in the request body.
 *
 * @param {string} prompt - The prompt that you want Mistral to complete.
 * @returns {string[]} A list of inference responses (completions) from the model.
 */
export const invokeMistral7B = async (prompt) => {
  const client = new BedrockRuntimeClient({ region: "us-west-2" });

  const modelId = "mistral.mistral-7b-instruct-v0:2";

  // Mistral instruct models provide optimal results when embedding
  // the prompt into the following template:
  const instruction = `<s>[INST] ${prompt} [/INST]`;

  const payload = {
    prompt: instruction,
    max_tokens: 500,
    temperature: 0.5,
  };

  const command = new InvokeModelCommand({
    body: JSON.stringify(payload),
    contentType: "application/json",
    accept: "application/json",
    modelId,
  });

  try {
    const response = await client.send(command);
    const decodedResponseBody = new TextDecoder().decode(response.body);

    /** @type {ResponseBody} */
    const responseBody = JSON.parse(decodedResponseBody);

    return responseBody.outputs.map((output) => output.text);
  } catch (err) {
    if (err instanceof AccessDeniedException) {
      console.error(
        `Access denied. Ensure you have the correct permissions to invoke ${modelId}.`,
      );
    } else {
      throw err;
    }
  }
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Complete the following: "Once upon a time..."';

  console.log("\nModel: Mistral 7B");
  console.log(`Prompt: ${prompt}`);

  const completions = await invokeMistral7B(prompt);
  completions.forEach((completion) => {
    console.log("Completion:");
    console.log(completion);
    console.log("\n");
  });
}
```
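Mistral 7B Instruct and Mixtral 8x7B Instruct (in the next example) embed the prompt in the same `[INST]` instruction template, so that formatting can be factored into a shared helper. A minimal sketch; the name `toMistralInstruction` is ours for illustration:

```javascript
// Embed a prompt in the instruction template used by the
// Mistral instruct-tuned models on Amazon Bedrock.
const toMistralInstruction = (prompt) => `<s>[INST] ${prompt} [/INST]`;

console.log(toMistralInstruction("Summarize this meeting."));
// <s>[INST] Summarize this meeting. [/INST]
```

The templated string is then sent as the `prompt` field of the request payload, exactly as in the examples above and below.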
For API details, see InvokeModel in the Amazon SDK for JavaScript API Reference.
The following code example shows how to generate text on Amazon Bedrock by invoking the Mixtral 8x7B model.
- SDK for JavaScript (v3)

Note

There's more on GitHub. Find the complete example and learn how to set up and run in the Amazon Code Examples Repository.

Invoke the Mixtral 8x7B foundation model to generate text.
```javascript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { fileURLToPath } from "url";

import {
  AccessDeniedException,
  BedrockRuntimeClient,
  InvokeModelCommand,
} from "@aws-sdk/client-bedrock-runtime";

/**
 * @typedef {Object} Output
 * @property {string} text
 *
 * @typedef {Object} ResponseBody
 * @property {Output[]} outputs
 */

/**
 * Invokes the Mixtral 8x7B model to run an inference using the input
 * provided in the request body.
 *
 * @param {string} prompt - The prompt that you want Mistral to complete.
 * @returns {string[]} A list of inference responses (completions) from the model.
 */
export const invokeMixtral8x7B = async (prompt) => {
  const client = new BedrockRuntimeClient({ region: "us-west-2" });

  // Mistral instruct models provide optimal results when embedding
  // the prompt into the following template:
  const instruction = `<s>[INST] ${prompt} [/INST]`;

  const modelId = "mistral.mixtral-8x7b-instruct-v0:1";

  const payload = {
    prompt: instruction,
    max_tokens: 500,
    temperature: 0.5,
  };

  const command = new InvokeModelCommand({
    body: JSON.stringify(payload),
    contentType: "application/json",
    accept: "application/json",
    modelId,
  });

  try {
    const response = await client.send(command);
    const decodedResponseBody = new TextDecoder().decode(response.body);

    /** @type {ResponseBody} */
    const responseBody = JSON.parse(decodedResponseBody);

    return responseBody.outputs.map((output) => output.text);
  } catch (err) {
    if (err instanceof AccessDeniedException) {
      console.error(
        `Access denied. Ensure you have the correct permissions to invoke ${modelId}.`,
      );
    } else {
      throw err;
    }
  }
};

// Invoke the function if this file was run directly.
if (process.argv[1] === fileURLToPath(import.meta.url)) {
  const prompt = 'Complete the following: "Once upon a time..."';

  console.log("\nModel: Mixtral 8x7B");
  console.log(`Prompt: ${prompt}`);

  const completions = await invokeMixtral8x7B(prompt);
  completions.forEach((completion) => {
    console.log("Completion:");
    console.log(completion);
    console.log("\n");
  });
}
```
For API details, see InvokeModel in the Amazon SDK for JavaScript API Reference.