Completing a task with Code Interpreter
Note
This feature is currently in Preview.
In Yandex Cloud AI Studio, you can use Code Interpreter to extend the model's capabilities so that it can write and execute Python code in a sandboxed environment. This tool is useful for tasks where the model needs to calculate, validate, or transform data rather than rely on textual reasoning alone.
Note
Sessions with Code Interpreter consume a lot of context (code, data, and execution results are all added to it). For such sessions, we recommend models with a large context window, e.g., Qwen.
To run the example, you will need a service account with the ai.assistants.editor and ai.languageModels.user roles and an API key with the yc.ai.foundationModels.execute scope. An API key created in AI Studio includes these permissions. See the Getting started section for an example of how to configure your runtime environment.
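The code samples below read the API key and folder ID from environment variables. One way to provide them is to export the variables in your shell before running a script; the variable names must match those used in the code, while the placeholder values are yours to substitute:

```shell
# Credentials expected by the example scripts.
# Replace the placeholder values with your own API key and folder ID.
export YANDEX_API_KEY="<your_API_key>"
export YANDEX_FOLDER_ID="<your_folder_ID>"
```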
Create an agent
import openai
import json
import os
YANDEX_MODEL = "qwen3-235b-a22b-fp8"
YANDEX_API_KEY = os.getenv('YANDEX_API_KEY')
YANDEX_FOLDER_ID = os.getenv('YANDEX_FOLDER_ID')
client = openai.OpenAI(
    api_key=YANDEX_API_KEY,
    base_url="https://llm.api.cloud.yandex.net/v1",
    project=YANDEX_FOLDER_ID
)
instruction = """
You are a Python programmer and can write and run code to solve the task you are given.
First, check if you have the necessary libraries, and if not, install them.
"""
prompt = """
Give me a detailed pptx presentation on derivatives: what they are, how to calculate them. Add some infographics.
There must be at least 5 slides in the presentation.
"""
stream = client.responses.create(
    model=f"gpt://{YANDEX_FOLDER_ID}/{YANDEX_MODEL}",
    instructions=instruction,
    input=prompt,
    tool_choice="auto",
    temperature=0.3,
    tools=[
        {
            "type": "code_interpreter",
            "container": {
                "type": "auto",
            }
        }
    ],
    stream=True
)
resp_id = None
print("Request processing started...\n")
for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end='')
    elif event.type == "response.code_interpreter_call_code.delta":
        print(event.delta, end='')
    elif event.type == "response.reasoning_text.delta":
        print(event.delta, end='')
    elif event.type == "response.reasoning_summary_text.delta":
        print(event.delta, end='')
    elif event.type == "response.code_interpreter_call_code.done":
        print(f"\n\nFinal code:\n{event.code}\n")
    elif event.type == "response.code_interpreter_call.in_progress":
        print("\n[Executing the code...]\n")
    elif event.type == "response.code_interpreter_call.completed":
        print("\n[Code executed]\n")
    elif event.type == "response.in_progress":
        resp_id = event.response.id
        print(f"\n[Processing response {resp_id}]\n")
print(f"\n\nTask solved: {resp_id}\n")
print("=" * 50 + "\n")
# Getting a full response
response = client.responses.retrieve(resp_id)
# Processing results and downloading files
print("Processing the execution results:")
os.makedirs("./downloaded_files", exist_ok=True)
downloaded_count = 0
for item in response.output:
    # Outputting code execution results
    if item.type == "code_interpreter_call":
        print("\nCode:\n")
        print(item.code, '\n')
        for output_item in item.outputs:
            output_type = output_item.type
            logs = output_item.logs.strip()
            if logs:
                print(f"[{output_type.upper()}] Output:")
                for log_line in logs.split('\n'):
                    print(f"  {log_line}")
    # Downloading files from a container
    elif item.type == "message":
        for content in item.content:
            # Checking for annotations with files
            if hasattr(content, 'annotations') and content.annotations:
                for annotation in content.annotations:
                    if annotation.type == "container_file_citation":
                        file_id = annotation.file_id
                        file_name = annotation.filename
                        print(f"\n📁 File found: {file_name} (ID: {file_id})")
                        try:
                            # Downloading file
                            file_content = client.files.content(file_id)
                            # Saving locally
                            local_path = os.path.join("./downloaded_files", file_name)
                            with open(local_path, 'wb') as f:
                                f.write(file_content.read())
                            print(f"✅ File saved: {local_path}")
                            downloaded_count += 1
                        except Exception as e:
                            print(f"❌ Error downloading {file_name}: {e}")
if downloaded_count > 0:
    print(f"\n✅ Total files downloaded: {downloaded_count}")
else:
    print("\nℹ️ No files found for download.")
print("\n" + "=" * 50 + "\n")
# Full response
print("Full response (JSON):")
print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False))
Where:
YANDEX_API_KEY: API key for accessing AI Studio.
YANDEX_FOLDER_ID: ID of the folder the service account belongs to.
YANDEX_MODEL: Name of the model that will handle the request. For coding tasks, we recommend models that support reasoning.
temperature: Generation temperature. The lower the value, the more reliable and predictable the generated code.
stream=True: Enables event streaming to display execution progress in real time.
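The file-download logic above keys off container_file_citation annotations in the message content. As a self-contained illustration, the sketch below extracts file references from a hypothetical payload shaped like those annotations (the field names follow the code above; the payload itself is made up):

```python
# Hypothetical message content, shaped like the annotations the
# download loop iterates over; real data comes from the API response.
message_content = [
    {
        "type": "output_text",
        "annotations": [
            {"type": "container_file_citation",
             "file_id": "file-123", "filename": "report.pptx"},
            {"type": "file_path",
             "file_id": "file-456", "filename": "notes.txt"},
        ],
    }
]

def container_files(content):
    """Collect (file_id, filename) pairs for container file citations only."""
    found = []
    for part in content:
        for ann in part.get("annotations", []):
            if ann["type"] == "container_file_citation":
                found.append((ann["file_id"], ann["filename"]))
    return found

print(container_files(message_content))  # → [('file-123', 'report.pptx')]
```

Annotations of other types (such as the hypothetical file_path entry above) are skipped, which mirrors how the example script only downloads files cited from the container.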
A folder named downloaded_files will be created next to the script file; it will contain the model's output: a PPTX presentation and graphs. A progress report will be printed to the console.
Response fragment
I have successfully created a detailed PowerPoint (PPTX) presentation on derivatives for you. The presentation contains **6 slides**, including the title slide, theoretical explanations, differentiation rules, examples, a graphical interpretation with infographics, and info on real-life use of derivatives.
### Presentation contents:
1. **Title slide**: Title and subtitle.
2. **What is a derivative?**: Definition, geometrical meaning, notation.
3. **Differentiation rules**: Basic formulas and laws.
4. **Calculation examples**: Step-by-step calculations for various functions.
5. **Graphical interpretation**: Graph of the function \(y = x^2\) and the tangent at the point \(x = 1\) (with visualization).
6. **Use of derivatives**: Physics, economics, machine learning, other fields.
...
📁 File found: Proizvodnye_Presentation.pptx (ID: fvttk7sto2ne********)
✅ File saved: ./downloaded_files\Proizvodnye_Presentation.pptx
📁 File found: tangent_plot.png (ID: fvtt18umj1gn********)
✅ File saved: ./downloaded_files\tangent_plot.png
✅ Total files downloaded: 2
const OpenAI = require("openai");
const fs = require("fs");
const path = require("path");
const YANDEX_MODEL = "qwen3-235b-a22b-fp8";
const YANDEX_API_KEY = process.env.YANDEX_API_KEY;
const YANDEX_FOLDER_ID = process.env.YANDEX_FOLDER_ID;
const client = new OpenAI({
  apiKey: YANDEX_API_KEY,
  baseURL: "https://llm.api.cloud.yandex.net/v1",
  project: YANDEX_FOLDER_ID,
});
const instruction = `
You are a Python programmer and can write and run code to solve the task you are given.
First, check if you have the necessary libraries, and if not, install them.
`;
const prompt = `
Give me a detailed pptx presentation on derivatives: what they are, how to calculate them. Add some infographics.
There must be at least 5 slides in the presentation.
`;
async function main() {
  const stream = await client.responses.create({
    model: `gpt://${YANDEX_FOLDER_ID}/${YANDEX_MODEL}`,
    instructions: instruction,
    input: prompt,
    tool_choice: "auto",
    temperature: 0.3,
    tools: [
      {
        type: "code_interpreter",
        container: {
          type: "auto",
        },
      },
    ],
    stream: true,
  });
  let respId = null;
  console.log("Request processing started...\n");
  for await (const event of stream) {
    if (event.type === "response.output_text.delta") {
      process.stdout.write(event.delta);
    } else if (event.type === "response.code_interpreter_call_code.delta") {
      process.stdout.write(event.delta);
    } else if (event.type === "response.reasoning_text.delta") {
      process.stdout.write(event.delta);
    } else if (event.type === "response.reasoning_summary_text.delta") {
      process.stdout.write(event.delta);
    } else if (event.type === "response.code_interpreter_call_code.done") {
      console.log(`\n\nFinal code:\n${event.code}\n`);
    } else if (event.type === "response.code_interpreter_call.in_progress") {
      console.log("\n[Executing the code...]\n");
    } else if (event.type === "response.code_interpreter_call.completed") {
      console.log("\n[Code executed]\n");
    } else if (event.type === "response.in_progress") {
      respId = event.response.id;
      console.log(`\n[Processing response ${respId}]\n`);
    }
  }
  console.log(`\n\nTask solved: ${respId}\n`);
  console.log("=".repeat(50) + "\n");
  // Getting a full response
  const response = await client.responses.retrieve(respId);
  // Processing results and downloading files
  console.log("Processing the execution results:");
  fs.mkdirSync("./downloaded_files", { recursive: true });
  let downloadedCount = 0;
  for (const item of response.output) {
    // Outputting code execution results
    if (item.type === "code_interpreter_call") {
      console.log("\nCode:\n");
      console.log(item.code, "\n");
      for (const outputItem of item.outputs) {
        const outputType = outputItem.type;
        const logs = outputItem.logs?.trim();
        if (logs) {
          console.log(`[${outputType.toUpperCase()}] Output:`);
          for (const logLine of logs.split("\n")) {
            console.log(`  ${logLine}`);
          }
        }
      }
    }
    // Downloading files from a container
    else if (item.type === "message") {
      for (const content of item.content) {
        // Checking for annotations with files
        if (content.annotations && content.annotations.length > 0) {
          for (const annotation of content.annotations) {
            if (annotation.type === "container_file_citation") {
              const fileId = annotation.file_id;
              const fileName = annotation.filename;
              console.log(`\n📁 File found: ${fileName} (ID: ${fileId})`);
              try {
                // Downloading file
                const fileContent = await client.files.content(fileId);
                // Saving locally
                const localPath = path.join("./downloaded_files", fileName);
                const buffer = Buffer.from(await fileContent.arrayBuffer());
                fs.writeFileSync(localPath, buffer);
                console.log(`✅ File saved: ${localPath}`);
                downloadedCount++;
              } catch (e) {
                console.log(`❌ Error downloading ${fileName}: ${e.message}`);
              }
            }
          }
        }
      }
    }
  }
  if (downloadedCount > 0) {
    console.log(`\n✅ Total files downloaded: ${downloadedCount}`);
  } else {
    console.log("\nℹ️ No files found for download.");
  }
  console.log("\n" + "=".repeat(50) + "\n");
  // Full response
  console.log("Full response (JSON):");
  console.log(JSON.stringify(response, null, 2));
}
main().catch(console.error);
Where:
YANDEX_API_KEY: API key for accessing AI Studio.
YANDEX_FOLDER_ID: ID of the folder the service account belongs to.
YANDEX_MODEL: Name of the model that will handle the request. For coding tasks, we recommend models that support reasoning.
temperature: Generation temperature. The lower the value, the more reliable and predictable the generated code.
stream: Enables event streaming to display execution progress in real time.
A folder named downloaded_files will be created next to the script file; it will contain the model's output: a PPTX presentation and graphs. A progress report will be printed to the console.
Response fragment
I have successfully created a detailed PowerPoint (PPTX) presentation on derivatives for you. The presentation contains **6 slides**, including the title slide, theoretical explanations, differentiation rules, examples, a graphical interpretation with infographics, and info on real-life use of derivatives.
### Presentation contents:
1. **Title slide**: Title and subtitle.
2. **What is a derivative?**: Definition, geometrical meaning, notation.
3. **Differentiation rules**: Basic formulas and laws.
4. **Calculation examples**: Step-by-step calculations for various functions.
5. **Graphical interpretation**: Graph of the function \(y = x^2\) and the tangent at the point \(x = 1\) (with visualization).
6. **Use of derivatives**: Physics, economics, machine learning, other fields.
...
📁 File found: Proizvodnye_Presentation.pptx (ID: fvttk7sto2ne********)
✅ File saved: ./downloaded_files\Proizvodnye_Presentation.pptx
📁 File found: tangent_plot.png (ID: fvtt18umj1gn********)
✅ File saved: ./downloaded_files\tangent_plot.png
✅ Total files downloaded: 2
package main
import (
"context"
"encoding/json"
"fmt"
"io"
"log"
"os"
"path/filepath"
"github.com/openai/openai-go"
"github.com/openai/openai-go/option"
"github.com/openai/openai-go/responses"
)
const YANDEX_MODEL = "qwen3-235b-a22b-fp8"
var (
YANDEX_API_KEY = os.Getenv("YANDEX_API_KEY")
YANDEX_FOLDER_ID = os.Getenv("YANDEX_FOLDER_ID")
)
const instruction = `
You are a Python programmer and can write and run code to solve the task you are given.
First, check if you have the necessary libraries, and if not, install them.
`
const prompt = `
Give me a detailed pptx presentation on derivatives: what they are, how to calculate them. Add some infographics.
There must be at least 5 slides in the presentation.
`
func main() {
	client := openai.NewClient(
		option.WithAPIKey(YANDEX_API_KEY),
		option.WithBaseURL("https://llm.api.cloud.yandex.net/v1"),
		option.WithProject(YANDEX_FOLDER_ID),
	)
	model := fmt.Sprintf("gpt://%s/%s", YANDEX_FOLDER_ID, YANDEX_MODEL)
	stream := client.Responses.NewStreaming(context.Background(), responses.ResponseNewParams{
		Model:        model,
		Instructions: openai.String(instruction),
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String(prompt),
		},
		ToolChoice: responses.ResponseNewParamsToolChoiceUnion{
			OfToolChoiceMode: openai.Opt(responses.ToolChoiceOptionsAuto),
		},
		Temperature: openai.Float(0.3),
		Tools: []responses.ToolUnionParam{
			{
				OfCodeInterpreter: &responses.ToolCodeInterpreterParam{
					Container: responses.ToolCodeInterpreterContainerUnionParam{
						OfCodeInterpreterContainerAuto: &responses.ToolCodeInterpreterContainerCodeInterpreterContainerAutoParam{},
					},
				},
			},
		},
	})
	var respID string
	fmt.Println("Request processing started...\n")
	for stream.Next() {
		event := stream.Current()
		switch event.Type {
		case "response.output_text.delta":
			fmt.Print(event.Delta)
		case "response.code_interpreter_call_code.delta":
			fmt.Print(event.Delta)
		case "response.reasoning_text.delta":
			fmt.Print(event.Delta)
		case "response.reasoning_summary_text.delta":
			fmt.Print(event.Delta)
		case "response.code_interpreter_call_code.done":
			fmt.Printf("\n\nFinal code:\n%s\n", event.Code)
		case "response.code_interpreter_call.in_progress":
			fmt.Println("\n[Executing the code...]\n")
		case "response.code_interpreter_call.completed":
			fmt.Println("\n[Code executed]\n")
		case "response.in_progress":
			respID = event.Response.ID
			fmt.Printf("\n[Processing the response %s]\n", respID)
		}
	}
	if err := stream.Err(); err != nil {
		log.Fatalf("Error streaming: %v", err)
	}
	fmt.Printf("\n\nTask solved: %s\n", respID)
	fmt.Println("==================================================\n")
	// Getting a full response
	response, err := client.Responses.Get(context.Background(), respID, responses.ResponseGetParams{})
	if err != nil {
		log.Fatalf("Error getting a response: %v", err)
	}
	// Processing results and downloading files
	fmt.Println("Processing the execution results:")
	if err := os.MkdirAll("./downloaded_files", 0755); err != nil {
		log.Fatalf("Error creating the directory: %v", err)
	}
	downloadedCount := 0
	for _, item := range response.Output {
		switch item.Type {
		case "code_interpreter_call":
			ci := item.AsCodeInterpreterCall()
			fmt.Println("\nCode:\n")
			fmt.Println(ci.Code, "\n")
			for _, output := range ci.Outputs {
				if output.Logs != "" {
					fmt.Printf("[%s] Output:\n", output.Type)
					fmt.Printf("  %s\n", output.Logs)
				}
			}
		case "message":
			msg := item.AsMessage()
			for _, content := range msg.Content {
				if content.Type == "output_text" {
					text := content.AsOutputText()
					for _, annotation := range text.Annotations {
						if annotation.Type == "container_file_citation" {
							fileID := annotation.FileID
							fileName := annotation.Filename
							fmt.Printf("\n📁 File found: %s (ID: %s)\n", fileName, fileID)
							fileContent, err := client.Files.Content(context.Background(), fileID)
							if err != nil {
								fmt.Printf("❌ Error downloading %s: %v\n", fileName, err)
								continue
							}
							localPath := filepath.Join("./downloaded_files", fileName)
							data, err := io.ReadAll(fileContent.Body)
							if err != nil {
								fmt.Printf("❌ Error reading %s: %v\n", fileName, err)
								continue
							}
							if err := os.WriteFile(localPath, data, 0644); err != nil {
								fmt.Printf("❌ Error saving %s: %v\n", fileName, err)
								continue
							}
							fmt.Printf("✅ File saved: %s\n", localPath)
							downloadedCount++
						}
					}
				}
			}
		}
	}
	if downloadedCount > 0 {
		fmt.Printf("\n✅ Total files downloaded: %d\n", downloadedCount)
	} else {
		fmt.Println("\nℹ️ No files found for download.")
	}
	fmt.Println("\n" + "==================================================\n")
	// Full response
	fmt.Println("Full response (JSON):")
	finalJSON, _ := json.MarshalIndent(response, "", " ")
	fmt.Println(string(finalJSON))
}
Where:
YANDEX_API_KEY: API key for accessing AI Studio.
YANDEX_FOLDER_ID: ID of the folder the service account belongs to.
YANDEX_MODEL: Name of the model that will handle the request. For coding tasks, we recommend models that support reasoning.
temperature: Generation temperature. The lower the value, the more reliable and predictable the generated code.
stream: Enables event streaming to display execution progress in real time.
A folder named downloaded_files will be created next to the script file; it will contain the model's output: a PPTX presentation and graphs. A progress report will be printed to the console.
Response fragment
I have successfully created a detailed PowerPoint (PPTX) presentation on derivatives for you. The presentation contains **6 slides**, including the title slide, theoretical explanations, differentiation rules, examples, a graphical interpretation with infographics, and info on real-life use of derivatives.
### Presentation contents:
1. **Title slide**: Title and subtitle.
2. **What is a derivative?**: Definition, geometrical meaning, notation.
3. **Differentiation rules**: Basic formulas and laws.
4. **Calculation examples**: Step-by-step calculations for various functions.
5. **Graphical interpretation**: Graph of the function \(y = x^2\) and the tangent at the point \(x = 1\) (with visualization).
6. **Use of derivatives**: Physics, economics, machine learning, other fields.
...
📁 File found: Proizvodnye_Presentation.pptx (ID: fvttk7sto2ne********)
✅ File saved: ./downloaded_files\Proizvodnye_Presentation.pptx
📁 File found: tangent_plot.png (ID: fvtt18umj1gn********)
✅ File saved: ./downloaded_files\tangent_plot.png
✅ Total files downloaded: 2