Hi Mike, great question, and one that has come up with several customers in conversations I've had recently. My first answer is to plug the upcoming On-Premise User Group, where the technical track should include a discussion on this topic. Also on the agenda is the service pack release cadence, which is sort of the other side of your question. The meeting is being held August 24th in NYC, and you should be receiving an email about it shortly.
Back to the real question. Most customers would answer "manual regression testing," due to the technical complexity and licensing issues involved in using various 3rd-party tools. Of course, there is also the issue of Ariba's "dynamic HTML tags," which throw additional technical complexity into the mix. A good number of customers subscribe to "full regression testing," meaning their policy is to test every single SP-introduced change, whether they use that feature or not, and then run through all legacy test scripts to ensure current configurations and customizations survived the SP process. But I've also seen plenty that "rip and run" (so to speak) by applying the service pack, testing a few key flows plus the individual targeted fixes, and then pushing through their various environments.
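To illustrate why those "dynamic HTML tags" defeat most record-and-playback tools: a recorded script latches onto element IDs that the server regenerates on every session, so the replay can't find anything. A common workaround is to build locators from attributes that survive regeneration. This is only a sketch under assumptions - the ID pattern, attribute names, and element names below are invented for illustration, not actual Ariba markup:

```python
import re

# Assumed pattern for session-generated IDs (hypothetical, for illustration):
# something like "_h5kq9w" that changes on every page render.
DYNAMIC_ID = re.compile(r"^_[a-z0-9]+$")

def build_stable_locator(attrs: dict) -> str:
    """Return an XPath that avoids session-generated IDs.

    Prefers attributes assumed to survive page regeneration (here 'name'
    and 'title') and falls back to the ID only when it looks stable.
    """
    for attr in ("name", "title"):
        if attrs.get(attr):
            return f"//*[@{attr}={attrs[attr]!r}]"
    elem_id = attrs.get("id", "")
    if elem_id and not DYNAMIC_ID.match(elem_id):
        return f"//*[@id={elem_id!r}]"
    raise ValueError("no stable attribute to anchor on; locator would be brittle")

# A recorded dynamic ID is rejected in favor of the stable 'name' attribute:
print(build_stable_locator({"id": "_h5kq9w", "name": "submitRequisition"}))
# //*[@name='submitRequisition']
```

The same idea is why a recorded script that "just works" in one session fails in the next: any tool you evaluate needs a way to parameterize its locators rather than replaying captured IDs verbatim.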
Another common answer is "LoadRunner" - but that really isn't regression testing... it's "load" testing, of course, for performance analysis.
Lastly, the two suggestions that have come up recently are HP Quick Test Pro (part of the same HP suite as LoadRunner), and also WinRunner. Part of the User Group conversation will hopefully steer towards the various automation tools and what capabilities customer attendees use (or want to use), and insight into what Ariba uses internally. The agenda isn't set in stone yet, but that's the goal.
Ariba would love to hear more about this, both on Exchange and at the On-Premise User Group, so I invite all in the community to respond.
Dave, thanks for sharing your perspectives and what you are seeing at other customers. For us, the challenge comes down to managing our resources to accomplish the priorities that the business sets for the year. Depending on the year, the business may have a number of changes they want to implement, leaving less time for Service Pack upgrades and the associated regression testing. Ultimately, it is about achieving a level of comfort with the amount of testing we feel we need to do for an SP upgrade and the resources required to complete that testing. To the extent that we can automate more of the testing, or agree to a smaller set of regression tests, we might be able to adopt more service packs in a given year. I would love to hear if any other customers are having success with automated testing tools. We do use LoadRunner for performance testing, but don't find it to be a fit for regression testing.
We bundle SPs but do one release a year, and we test it completely in multiple test environments. Currently, we are encountering an issue on SP14 that has been unresolved for the past 2 months. SP14 was released a few months ago and should be relatively bug-free, but that has not been the case for us.
What this means is that each environment is different and usage of functionality varies. Without proper regression testing, one may run into production issues.
I am open to hearing alternate opinions.
We're in a similar situation to you, Michael, in that we're lucky if we can apply one SP a year due to the level of regression testing that is required - and yes, this is manual regression testing, as we haven't found a way to automate regression testing (echoing Dave's point that automation is for performance).
Although we're heavily customized, it's typically not our customizations that cause issues when applying SPs. Typical SPs end up having 20, 30+ Hot Fixes, and we find that more often than not the Hot Fix is needed to fix something the SP broke or to resolve functionality issues with the SP's new features (e.g., scoped Delegation). So full regression testing is always needed just to test the baseline OOTB Ariba code. We talked about this at LIVE last year, and many of us feel that the 3-month turnaround time on SPs is one of the culprits here, in that there is greater emphasis on releasing SPs every 3 months than on quality-controlling the SP.
For organizations that can only fit in one SP a year, we feel your best bet for project planning is to choose an SP that is a few SPs behind the currently released SP, as those SPs will have already been updated with Hot Fixes. For example, SP15 Buyer has 31 Hot Fixes, yet the current SP is SP19. SPs 16-19 don't have a large number of Hot Fixes yet, most likely indicating that few customers have applied those SPs, based on the trending number of Hot Fixes. In this example, SP15 is the tried-and-tested SP you'd want to go with if you don't have flexibility in your timelines. If your timelines are more flexible, you can go with the latest SP, but plan an extra cycle or two to apply Hot Fixes as you and other customers encounter bugs.
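The selection rule above - treat a high Hot Fix count as evidence that other customers have already flushed the bugs out of that SP - can be sketched as a small helper. Only SP15's count of 31 comes from the post; the other counts and the threshold are invented for illustration:

```python
# Hot-fix counts per service pack. SP15's 31 is from the post above;
# the rest are hypothetical numbers for illustration.
hotfix_counts = {15: 31, 16: 4, 17: 2, 18: 1, 19: 0}

def pick_service_pack(counts: dict, threshold: int = 10) -> int:
    """Newest SP whose Hot Fix count suggests it has been widely applied.

    In the post's logic a *high* count is the good sign: it means other
    customers already hit the bugs and the fixes have shipped, so the SP
    is 'tried and tested' for a fixed annual timeline.
    """
    field_tested = [sp for sp, n in counts.items() if n >= threshold]
    if not field_tested:
        raise ValueError("no SP looks field-tested yet; plan extra fix cycles")
    return max(field_tested)

print(pick_service_pack(hotfix_counts))  # 15
```

With the current SP at 19, the helper lands on SP15, matching the example in the post; the threshold is the judgment call you'd tune from your own Hot Fix history.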
I just wanted to give you my perspective as I have worked on a number of service pack deployments for different customers and have worked through regression testing with them.
From my perspective, the level of regression testing that should take place is largely up to the comfort level of the customer and how complicated and customized their system is. As Dave Leonard stated earlier, our Engineering department does a level of regression testing (but nothing is ever perfect when it comes to testing), so errors can still be encountered once the new service pack is applied.
What I have told customers in the past in terms of regression testing is to focus mostly on their "normal" operating activities. If you use Buyer 9r1, then make sure that the requisition process is working properly and that purchase orders are being generated. Many times, customers have regression test scripts that were created earlier in the deployment process, and we encourage them to work through those.
In terms of the overall list of issues corrected in the SP, we usually review the change requests that have been fixed between the service packs (since, as you said in your post, you can usually get only one SP deployment in per year, each deployment will contain several service packs' worth of fixes). When we review the list of fixes, we determine whether each fix is applicable to the customer and the level of importance of the fix (High/Medium/Low). We then review the list with the customer and determine whether our classifications are correct (or make the necessary changes). We then make sure that the customer focuses their attention on the High/Medium items (and, time permitting, takes a look at the Low items).
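The triage process just described - keep only the applicable fixes, work High before Medium, and pull in Low only if time permits - can be sketched as follows. The CR numbers, areas, and ratings are hypothetical; in practice the list comes from the release notes of every SP between your current and target levels:

```python
# Hypothetical change-request list built from SP release notes.
change_requests = [
    {"cr": "CR-101", "area": "requisitioning", "applicable": True,  "priority": "High"},
    {"cr": "CR-102", "area": "sourcing",       "applicable": False, "priority": "High"},
    {"cr": "CR-103", "area": "PO generation",  "applicable": True,  "priority": "Medium"},
    {"cr": "CR-104", "area": "reporting",      "applicable": True,  "priority": "Low"},
]

RANK = {"High": 0, "Medium": 1, "Low": 2}

def build_test_queue(crs, include_low=False):
    """Applicable CRs only, High before Medium; Low only if time permits."""
    keep = {"High", "Medium"} | ({"Low"} if include_low else set())
    picked = [c for c in crs if c["applicable"] and c["priority"] in keep]
    return sorted(picked, key=lambda c: RANK[c["priority"]])

for cr in build_test_queue(change_requests):
    print(cr["cr"], cr["priority"])
# CR-101 High
# CR-103 Medium
```

CR-102 drops out because it doesn't apply to this customer's configuration, which is the step that keeps the regression effort proportional to what you actually use.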
I would focus regression testing in the following areas:
- If you have regression test scripts prepared for your normal operating activities, work through those and ensure that your normal actions are not affected in any way
- If you have integration with another system (SAP, Oracle, etc.), test in those areas and make sure that everything is working properly (for instance, make sure that you are generating purchase orders, etc.)
- If you have opened service requests for issues that are supposed to be resolved in this service pack, make sure that you test those (since you reported those issues, they probably came up during end users' normal operating activities)
- Work your way through the issues on the CR list that are the most important and have the biggest impact on your system
I hope this helps, and good luck with any future SP deployments!
I would like to know if any customers are using automated testing for both internal customization/enhancement releases and SP upgrades. Any suggestions on open-source automation testing tools?