Sparrow v2.33: Smarter Test Scheduling & Run Management!
Release Date: Oct 15, 2025
Hey Sparrow Explorers,
We’re excited to introduce Sparrow v2.33, bringing you powerful updates that streamline your schedule management and improve test result visibility. 🚀 From editing scheduled runs to exploring detailed test flow results, this release is packed with usability improvements, instant updates, and smoother controls!
🚀 What’s New?
✅ Schedule Run Management
- Schedule Run Management enables you to automate test execution at predefined times or intervals, eliminating the need for manual intervention. By configuring when and how frequently your tests run, selecting execution environments, and setting up notifications, you can maintain consistent test coverage and quickly identify issues through automated monitoring.
➕ Set Scheduled Runs
- The Schedule Run feature lets you automate test executions by selecting the environment, run frequency (Once, Daily, Hourly, Weekly), and timing. You can name the schedule, add email recipients, and set notification preferences. Once configured, clicking Schedule Run activates it. This ensures tests run consistently without manual input.
🧩 Edit Scheduled Runs
- The Edit Scheduled Runs feature allows you to update existing test schedules. You can modify the name, environment, frequency, timing, email recipients, and notification preferences. Changes are saved to adjust future runs without affecting past executions.
🗑️ Delete Scheduled Runs
- The Delete action permanently removes a schedule and all its related configuration settings from the system. Before the deletion is finalized, a confirmation dialog appears to prevent accidental removal. Once confirmed, the action cannot be undone.
⚙️ Run Management — Take Full Control of Your Test Runs
- We’ve redesigned how you monitor and manage your executed runs — both manual and scheduled!
🧾 Unified Test Results Tab
- The Test Results tab provides a chronological view of all test executions, showing whether each run was triggered manually or via a schedule. It displays key details like run time, status (Completed or Error), total API requests, pass/fail counts, and execution time. Users can sort results, access additional actions via a three-dot menu, and click any row for detailed logs. Pagination and navigation controls help browse through large result sets efficiently.
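To make the summary columns concrete, here is a small sketch of how per-request outcomes could be rolled up into the totals and status the tab displays. The data shapes and names (`RunSummary`, `summarize`, the `"outcome"` key) are assumptions for illustration, not Sparrow’s internal format:

```python
from dataclasses import dataclass

@dataclass
class RunSummary:
    total: int     # total API requests in the run
    passed: int
    failed: int
    status: str    # "Completed" if every request executed, else "Error"

def summarize(request_results: list[dict]) -> RunSummary:
    # Count explicit pass/fail outcomes; anything else (e.g. a network
    # error) counts as an execution error and flips the run status.
    passed = sum(1 for r in request_results if r.get("outcome") == "pass")
    failed = sum(1 for r in request_results if r.get("outcome") == "fail")
    errored = len(request_results) - passed - failed
    status = "Completed" if errored == 0 else "Error"
    return RunSummary(len(request_results), passed, failed, status)

print(summarize([{"outcome": "pass"}, {"outcome": "fail"}, {"outcome": "pass"}]))
# RunSummary(total=3, passed=2, failed=1, status='Completed')
```

A run that finishes every request is “Completed” even if some assertions failed; only requests that could not execute at all mark the run as “Error.”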
🗑️ Delete Test Results
- Test results can be deleted easily via the three-dots menu with a confirmation step in a modal. Once confirmed, the result is instantly removed without refreshing the page. Users receive success or error toasts for immediate feedback.
🔍 Detailed Test Result View
- The Test Result View shows key details like schedule name, status (Active, Inactive, Expired), environment, next run, and last run result. Users can toggle schedules on/off or view statuses such as “Paused” or “Completed.” Each schedule shows its next execution time and a summary of the last run with colour-coded indicators (Passed, Failed, etc.). Tooltips offer deeper insights like test counts and response times. Actions like Run Now, Edit, or Delete let users control schedules easily.
For more details and guidance, please visit our website:
- Please note that the macOS build is currently unavailable for this release due to a temporary technical issue. Our team is actively working to resolve it, and we will notify all users as soon as the updated version is available.
⚠️ Known Glitches
Here’s what we’re still refining because perfection takes time:
- Certain AI models may throw execution errors or exhibit performance lags, depending on the serving server.
- At times, the chatbot may display a "Something went wrong" error, though functionality resumes normally afterward.
- Query Explorer and Editor sync issues might play a bit of hide-and-seek in GraphQL.
- Dynamic variable creation might lead to partial data capture.
- Swagger YAML link is not supported while adding a Collection for Active Sync.
- Sometimes, switching models does not retain the previous logic or context.
- Fixing script errors may take more than one attempt, and AI may not always give the expected result.
- File upload for LLMs is currently in beta. You may experience occasional issues with uploading, file preview visibility, or model compatibility.
💡 Help Us Improve!
Your feedback helps us shape Sparrow’s future! Found a bug or have an idea? Let’s hear it!
📖 App Help Section: Find answers here.
💡 Feedback Hub: https://sparrowapp.canny.io/
📩 Email Us: contactus@sparrowapp.dev

Thank you for being a driving force in the Sparrow journey. Your constant support propels us forward, ensuring we achieve new milestones with every update. ✨
Let’s keep aiming higher, unlocking new potential, and creating something extraordinary together!
With ❤️ and gratitude,
The Sparrow Team