Effective Prompting
Learn techniques for writing prompts that get better results from Chat.
The assistant understands natural language, but some prompting techniques help it work more effectively. These guidelines help you get faster, more accurate answers.
Be specific about time
Time ranges are critical for log searches. The assistant defaults to recent data if you do not specify, which may not include the issue you are investigating.
Good:
- Show me errors from the last hour.
- What happened between 2pm and 3pm PST yesterday?
- Find logs from the deployment at 2024-03-15T14:30:00Z.

Avoid:
- Show me recent errors.
- What happened yesterday?
- Find logs from the deployment.

Specific time ranges help the assistant:
- Narrow the search window for faster results
- Find events that correlate with incidents
- Avoid returning irrelevant data from other time periods
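When you need an exact window, it can help to compute explicit UTC timestamps before pasting them into a prompt. A minimal sketch in Python (the "2pm to 3pm PST yesterday" window is just an illustration):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Illustrative only: turn "2pm to 3pm PST yesterday" into explicit UTC
# timestamps. America/Los_Angeles handles both PST and PDT correctly.
pacific = ZoneInfo("America/Los_Angeles")
utc = ZoneInfo("UTC")

yesterday = datetime.now(pacific).date() - timedelta(days=1)
start = datetime(yesterday.year, yesterday.month, yesterday.day, 14, tzinfo=pacific)
end = start + timedelta(hours=1)

# ISO-8601 UTC timestamps are unambiguous when pasted into a prompt.
prompt = (
    f"Find logs between {start.astimezone(utc).isoformat()} "
    f"and {end.astimezone(utc).isoformat()}."
)
print(prompt)
```

Pasting the printed sentence into the chat removes any ambiguity about time zones or what "yesterday" means.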
Name services and components
The assistant can search across all your logs, but naming specific services helps it focus on the right data.
Good:
- Why is the checkout-service returning 503 errors?
- Show me logs from the payment-gateway pod.
- What errors are coming from api-gateway in the prod-us-west cluster?

Avoid:
- Why are we getting errors?
- Show me the logs.
- What is broken?

If you are not sure which service is involved, you can ask the assistant to help identify it:
We are seeing checkout failures. Which services are involved in the checkout
flow, and which ones are showing errors?

Describe symptoms, not solutions
Tell the assistant what you observe, not what you think the fix should be. This gives it freedom to investigate without bias.
Good:
- Users are reporting that checkout takes over 30 seconds.
- The dashboard is showing a spike in 5xx errors starting at 2pm.
- Memory usage on the API servers jumped from 2GB to 6GB.

Avoid:
- I think the database connection pool is too small.
- We need to increase the timeout.
- The cache must be broken.

Describing symptoms lets the assistant:
- Consider multiple possible causes
- Find unexpected contributing factors
- Avoid confirmation bias in the investigation
Ask follow-up questions
Investigations rarely complete in a single prompt. Use follow-up questions to drill deeper into what the assistant finds.
Initial prompt:
Why are we seeing elevated error rates on the API?

Follow-ups based on findings:
- Which endpoints specifically are returning 500 errors?
- What do the stack traces show for those errors?
- Did anything change in the database around that time?
- When did this start happening?

Each follow-up builds on the context of the previous messages, so the assistant remembers what it already found.
Use thread context
The assistant remembers the entire conversation within a thread. You can reference previous findings without repeating details.
You mentioned the payment service was showing timeouts. Can you show me
the slowest queries from that service in the last hour?

Go back to the error we found earlier and look for related warnings in
the five minutes before it occurred.

This conversational context is one of Sazabi's key advantages over traditional query interfaces.
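Under the hood, this kind of thread memory typically amounts to resending the accumulated conversation with each turn. A minimal sketch, assuming a hypothetical chat backend (`call_model` and the message format are illustrative, not Sazabi's actual interface):

```python
# Hypothetical sketch: thread context as an accumulated message list.
def call_model(messages):
    # Placeholder for a real chat-completion call; a real backend would
    # generate an answer conditioned on the full history it receives.
    return f"(answer based on {len(messages)} prior messages)"

thread = []  # the whole conversation so far

def ask(question):
    """Append the question, call the model with full history, record the reply."""
    thread.append({"role": "user", "content": question})
    reply = call_model(thread)
    thread.append({"role": "assistant", "content": reply})
    return reply

ask("Why are we seeing elevated error rates on the API?")
# The follow-up carries the earlier findings implicitly via `thread`,
# so "that service" or "the error we found earlier" resolves correctly.
ask("Which endpoints specifically are returning 500 errors?")
```

This is why follow-ups can reference "the payment service" or "the error we found earlier" without restating details: the earlier turns travel with every new question.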
Example prompts by use case
Investigating an outage
- We got paged for high error rates on checkout at 3:15pm. What was happening at that time?
- Show me all services that had elevated error rates between 3:15pm and 3:30pm.
- What deployments or config changes happened in the hour before the incident?

Debugging a performance issue
- Users in Europe are reporting slow page loads on the dashboard. What is the p99 latency for requests from EU regions?
- Break down the latency by service. Which component is contributing the most to total response time?
- Show me the slowest database queries from the dashboard-backend in the last 30 minutes.

Understanding a new error
- We are seeing a new NullPointerException in the order service. Show me the full stack trace and any context from the surrounding logs.
- When did this error first appear? Was there a deployment around that time?
- How many times has this error occurred, and is it affecting all users or specific ones?

Preparing for an incident review
- Create a timeline of events for the checkout outage on March 15th between 2pm and 4pm.
- Which services were affected, and in what order did the errors propagate?
- What was the root cause, and what actions were taken to resolve it?

Tips for complex investigations
Start broad, then narrow
Begin with an open-ended question to let the assistant survey the landscape, then drill down based on what it finds.
What is the overall health of the payment system right now?

Then:

You mentioned elevated latency on stripe-connector. Show me the request
logs for that service.

Provide context when switching topics
If you shift to a different investigation within the same thread, give the assistant context:
Let's look at something different now. Earlier today we saw memory issues
on the worker pods. Can you pull logs from the worker-processor service
between 10am and 11am?

Ask the assistant to explain its reasoning
If you want to understand why the assistant reached a conclusion:
Walk me through how you determined that the database was the bottleneck.

Request specific output formats
If you need data in a particular format:
- Summarize the top 5 error types in a table with counts and percentages.
- Create a timeline diagram showing when each service started failing.

When the assistant cannot find an answer
Sometimes the assistant cannot find the data you need. Common reasons include:
- Data is not in Sazabi: Logs from the relevant service may not be configured to send to Sazabi.
- Time range mismatch: The data may have aged out of retention or the time range may be wrong.
- Wrong project selected: The data may be in a different project.
When this happens, the assistant tells you what it searched and suggests alternatives. You can then adjust your query or check your data source configuration.
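The retention case is easy to rule out yourself before adjusting the query. A minimal sketch, assuming a hypothetical 30-day retention window (the actual period depends on your plan and configuration):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed value; check your actual plan settings

def within_retention(query_time, now=None):
    """Return True if a timestamp is still inside the retention window."""
    now = now or datetime.now(timezone.utc)
    return now - query_time <= timedelta(days=RETENTION_DAYS)

now = datetime.now(timezone.utc)
print(within_retention(now - timedelta(days=7)))   # recent data is searchable
print(within_retention(now - timedelta(days=90)))  # likely aged out
```

If the window you are asking about falls outside retention, no rephrasing of the prompt will surface the data; check your retention settings instead.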