Understanding Responses
Learn how to interpret tool calls, artifacts, and evidence in the assistant's responses.
When the assistant responds to your questions, it shows not just the answer but the reasoning and evidence behind it. This transparency helps you verify findings and guide the investigation.
Text responses
The most common response type is direct text. The assistant provides explanations, summaries, and answers to your questions in natural language. Text responses may include:
- Direct answers to your questions
- Explanations of what was found
- Recommendations for next steps
- Context about your systems based on the data
Reasoning (extended thinking)
For complex questions, the assistant shows its reasoning process. This helps you understand how it approached the problem and verify its logic.
Expanding reasoning
Look for the expandable "Thinking..." section at the top of complex responses. Click to expand and see the assistant's internal reasoning.
Understanding the approach
The reasoning shows how the assistant broke down the problem, what hypotheses it considered, and why it chose its investigation path.
Verifying logic
Use the reasoning to verify the assistant's conclusions align with your understanding of your systems.
Tool calls
When the assistant investigates, it uses tools. Each tool call appears as an expandable step showing what the assistant did.
Tool call displays include:
- Tool name: Which tool was called (e.g., "Log Search", "Run Code")
- Parameters: The inputs the assistant provided to the tool
- Result: What the tool returned
Example: A log search tool call might show "Queried logs for service:api with level:error in the last hour" and display "Found 47 matching entries".
Click any tool call to expand it and see the full details, including the raw parameters and complete response.
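To make the three parts of a tool call display concrete, here is a minimal sketch of what an expanded tool call corresponds to. The dictionary keys and values are illustrative, modeled on the log search example above, not the product's actual schema:

```python
# Hypothetical shape of an expanded tool call: name, parameters, result.
# Field names and values are illustrative only.
tool_call = {
    "tool": "Log Search",
    "parameters": {
        "query": "service:api level:error",
        "time_range": "last 1 hour",
    },
    "result": {
        "summary": "Found 47 matching entries",
        "match_count": 47,
    },
}

# The collapsed view shows the tool name and a one-line result summary;
# expanding reveals the raw parameters and the complete response.
print(tool_call["tool"])
print(tool_call["result"]["summary"])
```

Reading a tool call this way (name, then inputs, then output) makes it easier to check whether the assistant queried the right data before trusting its conclusion.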
Artifacts
Artifacts are visual outputs the assistant creates during investigations. They help you understand data patterns and findings at a glance.
| Type | Description |
|---|---|
| timeseries | Time-series charts showing trends and patterns over time |
| table | Tabular data with sortable columns and filtering |
| logDetail | Detailed view of a specific log entry with all fields |
| alertCard | Summary card for an alert with status and metadata |
| mermaid | Diagrams, including flowcharts, sequence diagrams, and architecture views |
Artifacts appear inline in the conversation. You can:
- Hover to see additional details
- Click to expand for a full view
- Download charts and tables for sharing
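As an illustration of the mermaid artifact type, a diagram produced during an investigation might render a simple service flowchart like the sketch below. The service names are hypothetical:

```mermaid
flowchart LR
    client[Client] --> api[api service]
    api --> db[(Database)]
    api --> cache[(Cache)]
```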
Evidence and citations
The assistant backs its claims with data. When it makes a statement about your systems, it provides evidence:
- Links to specific log entries: Click to see the source data
- Timestamps: When events occurred
- Counts and aggregations: Quantified findings (e.g., "47 errors in the last hour")
- Field values: Specific data from your logs and traces
Evidence links remain valid as long as the underlying data exists in your log storage. You can return to a thread and follow evidence links to verify findings.
Follow-up suggestions
After completing an investigation step, the assistant often suggests next questions you might ask:
- Suggestions are based on what was found during the investigation
- Click any suggestion to immediately send that question
- Suggestions help you explore related issues without typing new queries
Follow-up suggestions are especially useful early in an investigation, when you may not know what to ask next. They guide you toward the most useful next steps based on what the assistant found.