Release Date: Nov 18, 2025
Hey Sparrow Explorers,
We’re excited to announce Sparrow’s latest update, introducing a powerful new Test Data experience designed to make managing, linking, and maintaining datasets easier than ever! 🚀
This release includes a dedicated Test Data module, enhanced import and instant preview capabilities, smart duplicate-file handling, and complete test data integration across scheduled runs and execution results. Whether you’re managing complex API tests or validating multiple environments, Sparrow now keeps all your datasets organized, traceable, and instantly accessible.
🚀 What’s New?
📘 Unified Test Data Experience Inside Test Flows
  • The Test Data feature is now fully integrated inside Test Flows, giving you a single place to manage all datasets used in automation. A dedicated Test Data tab displays a clean, structured list showing file name, type, size, and last updated time, making navigation simple and intuitive. You can import, validate, preview, and track JSON/CSV files across the entire execution flow, with automatic preview panels that show metadata and a table view of the dataset. Linked datasets remain visible in Scheduled Runs and Run Results, with clear indicators or warnings for missing files — ensuring full traceability and smoother debugging.
⚙️ Core Functionality & Workflow
  • Test Data can be added, viewed, searched, and refreshed directly inside Test Flows. JSON/CSV imports undergo backend validation, and successful uploads automatically open a preview with metadata and a table view. Linked datasets are shown inside Scheduled Runs and Run Results, including warning states for missing files. Sparrow preserves full end-to-end linkage to ensure consistent and accurate test execution every time.
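Sparrow performs this validation on its backend, whose implementation is not public. Purely as an illustration of the kind of check described above, here is a minimal sketch of validating a JSON/CSV test-data file and producing the preview metadata (file name, type, size, row sample). The function name and returned fields are hypothetical, not Sparrow's actual API:

```python
import csv
import json
import os

def validate_test_data(path):
    """Hypothetical sketch: validate a JSON/CSV test-data file and
    return preview metadata (name, type, size, first rows)."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in (".json", ".csv"):
        raise ValueError(f"Unsupported file type: {ext}")

    size = os.path.getsize(path)

    if ext == ".json":
        with open(path, encoding="utf-8") as f:
            data = json.load(f)  # raises JSONDecodeError (a ValueError) if malformed
        rows = data if isinstance(data, list) else [data]
    else:
        with open(path, newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))  # first row is treated as the header

    return {
        "name": os.path.basename(path),
        "type": ext.lstrip("."),
        "size_bytes": size,
        "row_count": len(rows),
        "preview": rows[:5],  # sample rows for a table-style preview
    }
```

A file that fails to parse raises before any metadata is returned, which mirrors the behavior described above: only valid files enter the workflow, and a successful import immediately yields the data needed for a preview panel.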
🎯 Practical Benefits for Testing
  • Centralizing datasets reduces confusion and speeds up test planning. Validation ensures only correct files enter your workflow, preventing unexpected run failures. The instant preview helps confirm data accuracy before executing flows. Visibility of datasets in scheduled runs and results makes debugging easier and improves auditability. Overall, these updates make data-driven testing simpler, safer, and more reliable.
⚠️ Note
The macOS build is temporarily unavailable for this release due to a technical issue. Our team is actively working on a fix and will notify users once it’s ready. In the meantime, please use the Webapp for the latest updates.
⚠️ Known Glitches
Here’s what we’re still refining because perfection takes time:
  • Certain AI models may throw execution errors or exhibit performance lags depending on the serving backend.
  • At times, the chatbot may display a "Something went wrong" error, though functionality resumes normally afterward.
  • The Query Explorer and Editor may occasionally fall out of sync in GraphQL.
  • Dynamic variable creation might lead to partial data capture.
  • Swagger YAML links are not supported when adding a Collection for Active Sync.
  • Sometimes, switching models does not retain the previous logic or context.
  • Fixing script errors may take more than one attempt, and AI may not always give the expected result.
  • File upload for LLMs is currently in beta. You may experience occasional issues with uploading, file preview visibility, or model compatibility.
💡 Help Us Improve
Your feedback helps us shape Sparrow’s future! Found a bug or have an idea? Let’s hear it!
📖 App Help Section: Find answers here.
💡 Feedback Hub: Sparrow REST API Tool
Thank you for being a driving force in the Sparrow journey. Your constant support propels us forward, ensuring we achieve new milestones with every update. ✨
Let’s keep aiming higher, unlocking new potential, and creating something extraordinary together!
With ❤️ and gratitude,
The Sparrow Team