
I could get into an even longer discussion on this, but basically, when I was bringing up UI automation teams, my advice was:

Use granular, element-oriented functions (e.g. loginButton.click() or fillForm(name, pw) type stuff) for the very small part of the very few tests that are specifically exercising that portion of the UI.

Those you probably define in the traditional POM (Page Object Model) way, as methods of the page (or functions in the page module, depending on the language).
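
Roughly, something like this is what I mean by the granular style (a sketch assuming a Playwright-flavored API; the selectors and the LoginPage/fillForm names are just illustrative):

    import { Page } from '@playwright/test';

    export class LoginPage {
      constructor(private page: Page) {}

      // One element / one action: only the few tests that specifically
      // exercise this part of the UI should call these.
      async fillForm(name: string, pw: string): Promise<void> {
        await this.page.fill('#username', name);
        await this.page.fill('#password', pw);
      }

      async clickLoginButton(): Promise<void> {
        await this.page.click('#login-button');
      }
    }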

Use result-oriented functions (logIn(), registerNewUser()) whenever it's "travel", i.e. things you do to get to the start of your scenario, or a setup/cleanup task.

Those you do not keep with your pages, they live in modules organized by task or result. Plus they have to work from anywhere. They can leave you somewhere, if that's their defined result, but they should be callable from any UI state. By the same token, tests shouldn't assume how they used the UI to get there, again, unless that was defined as their result.

In other words, they're functions: black boxes. The biggest point there was "you can't change this function without preserving that contract, and you can't assume anything but that contract."

The advantage is that if you could wave a magic wand for setup, travel, or cleanup and get the result, the test would still work and be valid. IOW, you can select the most robust and direct way to accomplish those things, even going completely around the UI with cookie injection or whatever.
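
As a rough sketch of what that can look like (again assuming a Playwright-flavored API; the example.test URLs and the /api/session endpoint are hypothetical), the only contract is the end state, "this user is logged in", not how we got there:

    import { BrowserContext, Page, request } from '@playwright/test';

    // Result-oriented "travel": callable from any UI state, replaceable by
    // anything else that leaves the user logged in.
    export async function logIn(context: BrowserContext, page: Page,
                                user: string, pw: string): Promise<void> {
      // The "magic wand": get a session token from the API and inject it
      // as a cookie, going around the login UI entirely.
      const api = await request.newContext();
      const res = await api.post('https://example.test/api/session', {
        data: { user, pw },
      });
      const { token } = await res.json();
      await context.addCookies([
        { name: 'session', value: token, url: 'https://example.test' },
      ]);
      await page.goto('https://example.test/dashboard');
    }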

What most other teams I've had visibility into tend to do differently from what I advised is use the granular element-oriented POM functions everywhere except fixture setup/cleanup. They don't have the "travel" concept of things you have to do on the way to your scenario start, so they include that in the test scenario itself as granular element calls.

And travel is really all setup. But for some reason, when it's "set up yourself via login, option selection, loading a file, etc.", people's thought process goes out the window and they think it all needs to be strung together in the UI like a user would do it. But intelligently separating out the very small bit of "specific UI manipulation that causes a state change + verification of that change" that is the test from everything else in the scenario that is setup/travel/cleanup gives you much more maintainable tests.
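
A sketch of what that separation looks like in a test (assuming Playwright's test runner, the logIn() sketch above, and made-up selectors/URLs):

    import { test, expect } from '@playwright/test';

    test('user can rename a project', async ({ page, context }) => {
      // Travel/setup: result-oriented. Any implementation that leaves us
      // logged in on the project settings page would be equally valid.
      await logIn(context, page, 'alice', 'secret');
      await page.goto('https://example.test/projects/demo/settings');

      // The actual test: one specific UI manipulation plus verification.
      await page.fill('#project-name', 'renamed-project');
      await page.click('#save');
      await expect(page.locator('#project-name')).toHaveValue('renamed-project');
    });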

Or even when they do separate them out, they're not really "result-oriented" functions. Instead they're "flow-oriented" macros that you couldn't replace with a magic wand, because the meat of the test assumes the intermediate UI flows they performed rather than just the end state, and they're written to be strung together in some coupled (and usually undocumented) way.

Then you have the systems that try to use the same functions for setup/cleanup and testing, caught between the need for granularity and the need for robustness. Those tend to grow extra "doItThisWay" flags on their functions, and stuff really goes to hell.

Gotta keep 'em separated!

TL;DR I agree with you, and even a few steps further.



That sounds similar to the Screenplay pattern [1]

It has the concepts of actors, abilities, interactions, questions, and tasks, which allows a good separation of concerns as well as much more user-focused tests.

[1] https://serenity-js.org/handbook/design/screenplay-pattern.h...
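
Very roughly (this isn't Serenity/JS's actual API, just a bare-bones sketch of the concepts): actors perform tasks, tasks describe outcomes rather than click paths, and questions read state:

    // Bare-bones sketch of the Screenplay ideas, not the Serenity/JS API.
    interface Actor {
      name: string;
    }

    // A task describes an outcome ("log in"), not a click path, so its
    // implementation can change without touching the tests that use it.
    interface Task {
      performAs(actor: Actor): Promise<void>;
    }

    // A question reads state, so tests assert on results, not on flows.
    interface Question<T> {
      answeredBy(actor: Actor): Promise<T>;
    }

    const logIn = (user: string, pw: string): Task => ({
      performAs: async (actor: Actor) => {
        // interactions (fill, click, API calls...) would go here
      },
    });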


Oh, wow! I had been trying to develop a pattern like this a few years ago, ~2016 (at the time I called it Action-Flow), but I ended up shifting away from test as a primary focus before I could polish it and make it cohesive enough to publish something.

I hadn't realized there was prior art to look at or potentially clone by mistake. I wonder how old this pattern is.

Thanks for showing me!




