Advanced Features with Gemini

Gemini provides several advanced features for controlling generation output, executing code, handling streaming responses, and maintaining conversation context.

Generation Configuration

Control output characteristics such as length, randomness, and stop conditions with the generationConfig parameters.

import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  generationConfig: {
    maxOutputTokens: 100,  // cap the response length, in tokens
    temperature: 0.5,      // moderate randomness
    stopSequences: ["\n"]  // stop at the first newline
  }
});

const result = await model.generateContent(
  "Explain quantum computing in simple terms."
);
console.log(result.response.text());
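
If you need different settings for a single call, generateContent also accepts a full request object with its own generationConfig, which takes precedence over the model-level settings for that request. A minimal sketch (the prompt and parameter values are illustrative):

// Per-request override: this generationConfig applies only to this call.
const creative = await model.generateContent({
  contents: [{ role: "user", parts: [{ text: "Write a haiku about the sea." }] }],
  generationConfig: { temperature: 0.9, maxOutputTokens: 60 }
});
console.log(creative.response.text());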

For information about chat conversations, see the Chat Conversations documentation.

Code Execution Tool

Enable the model to write and execute Python code in a sandboxed environment.

import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  tools: [{ codeExecution: {} }]  // enable the code execution tool
});

const result = await model.generateContent(
  "Calculate the sum of the first 40 odd numbers. " +
  "Please write a Python program to do this and return the result."
);
console.log(result.response.text());
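
When code execution runs, the returned candidate carries the generated code and its output as separate parts. A minimal sketch of inspecting them (field names follow the SDK's ExecutableCode and CodeExecutionResult part types; adjust to your SDK version):

// Each part may carry plain text, the generated code, or the execution output.
for (const part of result.response.candidates[0].content.parts) {
  if (part.executableCode) {
    console.log("Generated code:\n", part.executableCode.code);
  }
  if (part.codeExecutionResult) {
    console.log("Execution output:\n", part.codeExecutionResult.output);
  }
  if (part.text) {
    console.log("Model text:\n", part.text);
  }
}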

Configuration Options

Generation Config Parameters

  • maxOutputTokens: Maximum length of the generated response, in tokens
  • temperature: Controls randomness (0.0 to 1.0); lower values are more deterministic
  • stopSequences: Array of strings at which generation stops
  • topP: Nucleus sampling parameter
  • topK: Top-k sampling parameter
  • candidateCount: Number of candidate responses to generate (all of these are combined in the sketch below)
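
A minimal sketch combining these parameters, reusing genAI from the earlier examples (the values shown are illustrative, not recommendations):

const sampledModel = genAI.getGenerativeModel({
  model: "gemini-2.0-flash",
  generationConfig: {
    maxOutputTokens: 256,
    temperature: 0.7,
    topP: 0.95,          // sample from the smallest token set with cumulative probability 0.95
    topK: 40,            // consider only the 40 most likely tokens at each step
    candidateCount: 1,   // number of responses to return
    stopSequences: ["END"]
  }
});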

Best Practices

  1. For Generation Config:

    • Use lower temperature (0.2-0.4) for factual responses
    • Use higher temperature (0.7-0.9) for creative tasks
    • Set appropriate maxOutputTokens based on your needs

  2. For Code Execution:

    • Review the generated code and validate its output before relying on it
    • Set appropriate timeouts
    • Handle execution errors gracefully (see the sketch after this list)
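
A minimal sketch of the timeout and error-handling pattern, using a plain Promise.race wrapper (withTimeout is a hypothetical helper, not part of the SDK; model is the code-execution model from above):

// Hypothetical helper: rejects if the model call takes longer than `ms`.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
    )
  ]);
}

try {
  const result = await withTimeout(
    model.generateContent("Compute the 20th Fibonacci number in Python."),
    30_000
  );
  console.log(result.response.text());
} catch (err) {
  // Surface execution or network failures instead of crashing.
  console.error("Generation failed:", err.message);
}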

For information about streaming responses, see the Streaming Responses documentation.