novalee
USA
1 Posts
Posted - 08/19/2025 : 08:42:36 AM
I’m currently preparing for the ISTQB Certified Tester Performance Testing (CT-PT) exam and also working on real-world performance test reports. One area I want to optimize is visualizing performance test metrics (response times, throughput, error rates, and scalability patterns) in Origin.
In tools like JMeter or LoadRunner, I can export CSV logs of metrics over time. Normally I would script multiple plots in Python/Matplotlib (e.g., response times across test scenarios, error rate vs. concurrent users), but I’m trying to replicate that workflow in Origin with more advanced formatting and automation.
What I’m trying to achieve:
- Multi-panel layout: e.g., a 3×3 grid where each panel tracks a different CT-PT performance metric across test runs.
- Grouped Y-series per metric: e.g., response times from different test cases (Login, Checkout, Search) grouped within the same panel.
- Unified legend across all panels, for cleaner reports.
- Cloneable templates: so I can reuse the same visualization format for different test datasets without rebuilding everything manually.
- Automated scripting (LabTalk or Python in Origin): ideally automating import + plot generation, similar to my Python workflow; a rough sketch of what I’m aiming for is below.
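To make the automation goal concrete, here is a minimal sketch of the import + plot step using OriginLab's originpro Python package, assuming a CSV export with elapsed time in column 0 and one metric per following column. The file path, worksheet name, and the My3x3.otpu template name are placeholders for my setup, not real files:

```python
# Minimal sketch, assuming the originpro package (pip install originpro)
# and a CSV export with elapsed time in column 0 and one metric per
# column after that. File and template names below are placeholders.
import originpro as op

op.set_show(True)  # show the Origin window when driving it from external Python

# 1. Import the JMeter/LoadRunner CSV into a fresh worksheet.
wks = op.new_sheet('w', lname='ct_pt_run')
wks.from_file(r'C:\data\ct_pt_run.csv', keep_DC=False)  # hypothetical path

# 2. New graph from a saved multi-panel template (e.g., a 3x3 grid
#    saved as My3x3.otpu); each panel is a separate layer.
gr = op.new_graph(template=r'C:\templates\My3x3.otpu')

# 3. One metric per layer/panel: plot column i against elapsed time.
for i in range(1, 4):   # columns 1..3 = response time, throughput, error rate
    gl = gr[i - 1]      # layer index 0..2
    gl.add_plot(wks, coly=i, colx=0, type=200)  # 200 = Origin's line plot type ID
    gl.rescale()
```

If this holds up, the same loop could run over one CSV per test run and re-apply the template each time.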
Question:
Has anyone here automated Origin workflows for software performance testing data before? How would you suggest handling multi-metric grouping and unified legends for CT-PT-style reports? Is it practical to use cloneable graph templates with labels (e.g., scenario names like Login/Checkout) as group identifiers? A sketch of my current attempt at the grouping part follows.
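For reference, this is the part I’m least sure about, sketched as a continuation of the script above. The page name 'Graph1' and the plot indices 0–2 are assumptions about what the script created, and legendupdate is the stock LabTalk X-Function for rebuilding legends, called here through op.lt_exec:

```python
# Continuation of the sketch above; names and indices are placeholders.
import originpro as op

gr = op.find_graph('Graph1')   # whatever page the earlier script created
gl = gr[0]                     # panel holding the Login/Checkout/Search curves

# Group the three scenario plots so styles increment together and the
# legend treats them as one block (plot indices 0..2 assumed).
gl.group(True, 0, 2)

# Rebuild the legend from the grouped plots via the LabTalk
# legendupdate X-Function.
op.lt_exec('legendupdate update:=reconstruct')
```

I’m not sure whether grouping per layer like this coexists cleanly with a single master legend across all nine panels, or whether people keep one legend per panel and hide the rest.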
I also think visualizing CT-PT test results in Origin could make studying metrics and patterns easier than working from raw logs. I’ve been using practice resources from certshero to map exam concepts onto real data analysis, and adding Origin graphs makes the preparation more hands-on.
Would love to hear your experiences or see example workflows!