Whether you call it “UAT,” “regression testing,” “integration testing,” or something else, all I can say is: it’s grueling, and it’s REALLY important to do a good job (as good as possible within the constraints of reasonableness). The Big Work Project (which I wrote about at the previous link) went through a two-week UAT phase, and I wanted to write up some of what I’ve learned. These are provisional field notes, because I know we have a lot more learning ahead of us!
For readers who don’t use Salesforce or a similarly structured database/software product, this post might be a little boring. But if you haven’t asked yourself “is this ready for action?” before rolling out a new system or tool, I think there are tidbits in here that could help you ask AND answer the question! I’d love to hear feedback about how to incorporate testing into Google Sheets, Google Forms, survey tools, and other tools that are at your fingertips!
About the T Swift gifs… idk what to tell you. I don’t really listen to her music, but her gifs and memes perfectly encapsulated my feelings about UAT throughout the process, so I had to go all in and include them here.
complex? build?
Complexity is definitely in the eye of the beholder, and everyone’s complexity spectrum is different. When I got my amazing day job, I didn’t even know things could be customized as intricately as ours are, and now I know that there are plenty of orgs that go beyond what we do!
The kind of complexity I’m working with here includes lots of different kinds of custom code, Flows (launched in a variety of different ways), forms, mail merge downloads, a login portal for external users, and a lot more. It’s really fun (and sometimes frustrating) to untangle all of the overlapping types of automation and make sure they are all working properly.
What do I mean by build? Well, this is all custom configuration on top of Salesforce that we “build” by creating new features when we need them. Very little in our system works as “Salesforce designed it.” Instead, we use Salesforce like Legos, combining bits and pieces to build the perfect structure.
mindset
The first thing I want to write about is attitude. I expected UAT to be emotionally challenging (ha, it was!) and I tried to prepare myself by doing some verbal affirmations, asking for support from friends, and having a really solid plan. Why isn’t this talked about more? UAT really did take a toll on my confidence (both in anticipation and in the moment), and it was emotionally/egotistically HARD!!! (Don’t worry, I am feeling better now.)
My friend David brilliantly reminded me that:
- finding Errors is a GOOD THING and we should celebrate that (not blame or hide issues)
- testing is PART OF developing new things, not an ASSESSMENT of the success of the thing
It’s hard because, to a large degree, I was testing my (and my team’s) own work. So while finding things is GOOD (from a testing mindset), it can also feel embarrassing/discouraging from a personal standpoint. I had to put my ego aside and admit to my oversights. Talk about humbling!
Another thing that makes UAT hard is the utter inability to fix things! As a problem solver, I want to SOLVE the problem, not just identify it. But after 8 months of building/fixing, I knew that at this phase of the process, identifying was our only job. And damn, not an easy one at that!
Here’s my advice for getting your mindset right:
- remember that YOU are not YOUR CODE* (something wrong with the code != something wrong with you)
- start off each day with affirmations and intention setting
- celebrate finding funky problems – that means UAT is going well
- never UAT alone!
*your code = general category of formulas, automations, integrations, code, or anything custom that you are testing!
on/off
Transparency note: I’m assuming here that testing is happening in a Sandbox that already has data loaded. If you are testing in a scratch org, dev org, or empty sandbox, then you will also need to generate test data, which is outside the scope of this blog post. If you want to learn more about this, check out the Data Generation Toolkit project!
As you prepare to dive into testing, it’s important to make sure that the Right Things Are On and, consequently, the Right Things Are Off. Some areas that are useful to check:
- Contact and User emails – are they real or invalid? Do you have any automations that would accidentally email real people? Should emails be Real or Fake? You might need to do a data upload (or run a quick script – see the sketch below) to revise emails in your testing environment so they are appropriately real or fake.
- Org Email Deliverability (this is a setting in Setup). Is it On? Is it Off? Which one SHOULD it be?
- Flows – are they Active? Should they be Active?
- Experience Sites (fka Communities) – are they Active? Should they be Active?
- Scheduled Apex Jobs – Are they set up? What day(s) should they run on?
- Scheduled Flows – What day(s) should they run on? If you reschedule them for testing, make sure you change them back for real!
- Testers – do they have access and training to use the Sandbox?
There are probably a zillion other settings/features that need to be examined before testing begins, but these are some of the ones that came up for us most recently. If you have others, please leave me a comment!
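To make the email check concrete: a common sandbox-prep pattern is to scrub emails so no automation can accidentally reach a real person. Below is a minimal sketch you could run as anonymous Apex (via the Developer Console or Workbench). The “.invalid” suffix convention and the record limit are my assumptions – adjust both for your org. (Salesforce appends “.invalid” to User emails when a sandbox is created or refreshed, but it does NOT touch Contact emails, which is exactly why this check matters!)

```apex
// Sketch: make Contact emails fake in a sandbox by appending ".invalid"
// so no automation can email a real person. Assumes the ".invalid"
// convention; the LIMIT is illustrative — batch for larger data volumes.
List<Contact> contacts = [
    SELECT Id, Email
    FROM Contact
    WHERE Email != null
      AND (NOT Email LIKE '%.invalid')
    LIMIT 200
];
for (Contact c : contacts) {
    c.Email = c.Email + '.invalid';
}
update contacts;

// Bonus: a quick look at which scheduled jobs exist and when they next fire.
for (CronTrigger ct : [
    SELECT CronJobDetail.Name, State, NextFireTime FROM CronTrigger
]) {
    System.debug(ct.CronJobDetail.Name + ' | ' + ct.State + ' | ' + ct.NextFireTime);
}
```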
^^^ This is how I felt about some things working and other things not working. ^^^
to do list
Once testing begins, you need a clear list of what features to test. Usually, these are called Test Scripts, and they can range from extremely detailed to a basic headline. This often comes down to constraints of reasonableness 🙁 and it certainly did for us. We ended up with about 57 Things To Test, which were about as detailed as “log in” or “push this button.” Womp womp? Something is definitely better than nothing.
I think the time you put into generating Test Scripts almost always comes back to you (in a good way), especially if you are able to enable people who AREN’T YOU to test things by getting the instructions out of a dusty corner of your brain. However, I can’t say that I know this from experience, because I’ve never successfully done it 🙁
Without detailed test scripts, I noticed that (1) some things might just not get tested; (2) we spent a lot of mental energy coming up with permutations when we could have otherwise spent that time testing; and (3) we might not all agree on the expected outcome/might have different unexamined assumptions about how things are supposed to work, which can invalidate our testing results (scarrrrrry).
So even though it’s tedious, I wish I had spent more time on test scripts before UAT kickoff. Here’s an example template for getting started.
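If it helps, here’s the shape of one test script entry – every specific below is made up for illustration:

- Feature: Portal login (external user)
- Steps: go to the login page → enter test credentials → click Log In
- Expected result: user lands on their home page and sees only their own records
- Actual result / Pass-Fail / Tester / Date / Notes

Even this level of detail settles the “what is supposed to happen?” question before anyone starts clicking.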
My tips for making your test scripts:
- Don’t forget to include reports and dashboards
- Don’t forget to include all profiles and roles (if using)
debugging
What is debugging? It’s the technical term for finding errors by going through a process one step at a time. Sometimes there are debug reports or logs you can run for a detailed look at back-end processes.
While I mentioned above that we were testing, NOT fixing, sometimes debugging is necessary. Here are the reasons that motivated me to do minimal debugging:
- There was an error that was a blocker (no reasonable workaround)
- I wanted to log the error and the cause (but fix it later)
(Yes, I know I shouldn’t put too much time into this, but I’m me!)
There were several (FREE!) tools that I used which made debugging infinitely easier! Here they are:
- Workbench – my favorite place to run SOQL queries, very useful for testing some code and some Conga features! I have also used it to drill down on object relationships when they were confusing or difficult to access via reports (see the sample query after this list).
- Salesforce Inspector Chrome extension – lets me easily see every field on a record without adding them to the page layout! Super useful for checking field updates in automation on fields I don’t usually see but that serve a purpose. This was SUCH a game changer for me – I can’t imagine doing this volume of testing without it!
- Failed and Paused Flow Interviews – a hidden gem in the Setup menu! Learn more about it here. It’s the most useful place to look for specific error messages about flow-based automation.
- Flow debug feature – this is a newer feature in the Flow automation suite, and it is really cool! You can pretty easily test Flows “as if” a record were created or updated, with an amazing visual guide through the record-triggered flow showing which decisions/nodes it hit. It’s not as detailed as updating a record FOR REAL, but it’s still an amazing tool. I wish I had used it even more!
- Debug log – for really thorny issues. I wrote an article about how to use it here (don’t worry, debug logs have not changed much in two years).
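Since I keep mentioning Workbench, here’s the kind of relationship drill-down query I mean. The objects and fields are standard, but the filter is made up for illustration – the parent-to-child subquery lets you see related records side by side without building a report:

```soql
SELECT Id, Name, Account.Name,
       (SELECT Id, Subject, Status FROM Cases)
FROM Contact
WHERE Email LIKE '%.invalid'
```

Each Contact row comes back with its Account name (child-to-parent dot notation) plus a nested list of its Cases (parent-to-child subquery), which is exactly the overlapping-relationship view that’s hard to get from a single report.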
timing
Getting the “timing” right with UAT is tricky, and it can sometimes feel a bit chicken-and-egg-ish. I found that testing when *most* things were *mostly done* worked well for us (even though I had to scramble to build some stuff so that it was testable!).
Our UAT period was two weeks, including a holiday (which cost us 2.5 days), so 7.5 total workdays. Testing longer than that… I would have lost my marbles. Anything shorter would have made it difficult to finish everything. You know your schedule best, but this turned out to be the appropriate window for us.
I also organized “touch bases” every day during testing. On A days, we had a 30-minute touch base. On B days, we had a 90-minute touch base, which we used for “synchronous” testing (aka screenshare and test/discuss together). For B days, it was really useful to designate a scribe to record issues so that the person demoing did not have to do it.
While having daily meetings with the same people is not my normal cadence, it proved to be extremely beneficial for accountability, motivation, momentum, removing blockers, commiserating, and staying connected.
UAT – I did it all the time, sometimes in meetings and sometimes solo.
recording + celebrating wins
It’s important to have a plan for recording all of your testing feedback before diving in (whether that’s a form, a spreadsheet, or a dedicated application… there are many to choose from!). Ours was developed by our consultants and worked very well. It even had cute buttons to push to submit feedback categorized by type. My only wish was for a way to log “successful” tests (I wasn’t sure if success meant… 0 feedbacks, or if we should log a feedback for “success”). Personally, I LOVE celebrating things that work, so I recorded feedback for my successes anyway!
At the end of testing, I put together a wrap up email that included metrics for number of tests completed, number of hours dedicated to testing, and number of feedbacks recorded. It felt REALLY good to do this – like we had really accomplished something big. I encourage you to do the same (reach out to me to borrow my template!).
Whether celebrating bugs (as David suggested) or celebrating non-bugs (as my heart wanted to do), I think the real lesson here is that we should celebrate EVERYTHING about taking true ownership of our system! Really hammering out what a system should be doing and making sure that it actually works is a gift that technologists/operations baddies GIVE to our orgs to keep the lights on and the mission going. It’s not glamorous but it can be rewarding with the right tools and the right mindset – or with imperfect tools and striving for the right mindset. We were scrappy but we got through it, and if you’re reading this, I know you can, too!