
test: [M3-8060, M3-8061] - Cypress tests for Linode Create v2 flows #10469

Merged

Conversation

Contributor

@jdamore-linode jdamore-linode commented May 14, 2024

Description πŸ“

This adds a bunch of new tests for @bnussman-akamai's refactored Linode create flow. The aim is to have parity with our existing test coverage and to add new test coverage for functionality that was previously uncovered.

Additionally, this introduces a new util pattern, similar to page objects, to help reduce redundancy across our tests and improve test development velocity. Building on our existing UI helpers, these utils are intended to perform common interactions on various Cloud Manager pages. Very interested in feedback here -- I've been on the fence about this for a while because I don't want too many layers of abstraction to get in the way when troubleshooting our tests.

What's missing?

  • Creating/selecting a Firewall during Linode create (ticket pending)
  • Selecting add-ons (backups, private IP) during Linode create (ticket pending)
  • Create via CLI tests (ticket pending)
  • Error flows (tickets pending)
  • We're using click interactions rather than touch interactions for the mobile tests (some analysis needed to see if this will work, but I think we can do this using real events)

(Sorry for the size of this PR)

Changes πŸ”„

In the tests

  • Renames existing Linode create spec file from create-linode.spec.ts to legacy-create-linode.spec.ts.
  • Adds Linode create end-to-end tests for each major plan type (shared, dedicated, high memory) against new flow
  • Adds integration tests for new Linode create flow against small (i.e. mobile) screen sizes
  • Adds integration tests for VPC selection/creation during new Linode create flow
  • Adds integration tests for VLAN selection/creation during new Linode create flow
  • Adds integration tests for SSH key selection/creation during new Linode create flow
  • Adds integration tests for cloud-init / user data during new Linode create flow
  • Adds linodeCreatePage and vpcCreateDrawer page utils

In src

  • Adds a couple data-qa- attributes to our paper and Linode summary section components
  • Fixes a small formatting issue related to the plan size in the Linode create summary

How to test πŸ§ͺ

I encourage testers to run the new/modified tests using the Cypress debugger via yarn cy:debug. Otherwise, you can run all of the Linode tests:

yarn cy:run -s "cypress/e2e/core/linodes/*"

As an Author I have considered πŸ€”

Check all that apply

  • πŸ‘€ Doing a self review
  • ❔ Our contribution guidelines
  • 🀏 Splitting feature into small PRs
  • βž• Adding a changeset
  • πŸ§ͺ Providing/Improving test coverage
  • πŸ” Removing all sensitive information from the code and PR description
  • 🚩 Using a feature flag to protect the release
  • πŸ‘£ Providing comprehensive reproduction steps
  • πŸ“‘ Providing or updating our documentation
  • πŸ•› Scheduling a pair reviewing session
  • πŸ“± Providing mobile support
  • β™Ώ Providing accessibility support

@jdamore-linode jdamore-linode requested review from mjac0bs, carrillo-erik, bnussman-akamai, a team, hkhalil-akamai and abailly-akamai and removed request for a team May 14, 2024 20:08
}

// Array of common mobile viewports against which to test.
export const MOBILE_VIEWPORTS: ViewportSize[] = [
Contributor Author

Curious what screen sizes we should be targeting in our tests, and whether we should be testing portrait and landscape orientations. If we parameterize many of our tests against this array, it'll have a pretty big impact on our Cypress Cloud usage if there are a lot of entries.

Contributor

Yeah, I agree. Many screen sizes make sense for visual regression, but here we're not testing devices, we're testing screen sizes. As I mentioned in my other comment, I think there would be more value in testing devices (a la BrowserStack), since we'd be more likely to encounter cross-device/cross-browser bugs than responsive bugs.

For screen sizes I would keep things to a max of two devices (one mobile, one tablet). My two cents!
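Following that suggestion, a trimmed-down list could look something like the sketch below: one phone and one tablet, both portrait. The `ViewportSize` shape, the dimensions, and the spec body are illustrative assumptions, not the actual Cloud Manager code.

```typescript
// Stand-ins for the Cypress globals; this is a sketch, not a runnable spec.
declare const describe: any;
declare const it: any;
declare const cy: any;

interface ViewportSize {
  label: string;
  width: number;
  height: number;
}

// Hypothetical two-entry list (one phone, one tablet, both portrait).
export const MOBILE_VIEWPORTS: ViewportSize[] = [
  { label: 'phone (portrait)', width: 375, height: 667 },
  { label: 'tablet (portrait)', width: 768, height: 1024 },
];

// Parameterizing a spec against the list; defined but not invoked here.
export const defineMobileCreateSpecs = () => {
  MOBILE_VIEWPORTS.forEach((viewport) => {
    describe(`Linode create (${viewport.label})`, () => {
      it('creates a Linode at a small screen size', () => {
        cy.viewport(viewport.width, viewport.height);
        // ...drive the create flow exactly as the desktop spec does...
      });
    });
  });
};
```

Keeping the list to two entries bounds the multiplier on suite time and Cypress Cloud usage while still exercising both breakpoint families.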

Contributor

@jdamore-linode Do we have any analytics on this? It'd be helpful to know what browsers and devices our users rely on when using our application.

Contributor Author

Definitely interested in feedback on this and vpc-create-drawer.ts. Some guidelines that I think can help keep these utils helpful without becoming overly complicated (specifically from a troubleshooting standpoint):

  • Like our UI helpers, these are just simple JS objects with functions related to whatever page the object represents
  • The functions only perform interactions; utilities related to finding elements on a page should stay in ui
  • There's no state
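The guidelines above could be sketched roughly like this: a plain, stateless object whose functions only perform interactions. The selectors and method names here are illustrative, not the actual linode-create-page.ts API.

```typescript
// Stand-in for the Cypress global; sketch only.
declare const cy: any;

// A stateless page util per the guidelines above: just a plain JS object of
// interaction functions, no element-finding utilities and no stored state.
export const linodeCreatePage = {
  /** Types a label into the (hypothetical) Linode Label field. */
  setLabel: (label: string) => {
    cy.findByLabelText('Linode Label').clear().type(label);
  },

  /** Clicks a plan selection card by tab and plan name (illustrative). */
  selectPlan: (planTab: string, planName: string) => {
    cy.findByText(planTab).should('be.visible').click();
    cy.findByText(planName).should('be.visible').click();
  },
};
```

Because the object holds no state, any method can be called from any test in any order, which keeps troubleshooting simple: a failure points directly at one interaction.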


github-actions bot commented May 14, 2024

Coverage Report: βœ…
Base Coverage: 81.64%
Current Coverage: 81.64%

Contributor

@abailly-akamai abailly-akamai left a comment

Super nice to see all this added coverage! I like the linodeCreate abstraction a lot.

Left some comments to start conversations I found important πŸ‘

mockGetFeatureFlagClientstream();
});

MOBILE_VIEWPORTS.forEach((viewport) => {
Contributor

What is the primary goal of testing different viewports in your opinion?

I usually find that feature quite useful for visual regression tests; however, I'm struggling to see a valuable tradeoff for integration tests (the tradeoff being coverage vs. suite execution time).

By looping here we're testing exactly the same functionality (we're also looping from inside the loop). Not that there's anything wrong with that, but our application is responsive out of the box, meaning we usually only handle responsiveness via CSS media queries, which usually don't alter functionality.

Again, this is a very important flow so there is value here, but I wonder if there's a concern about blindly duplicating runs when they're not needed in the long term (aka, "lemme just use MOBILE_VIEWPORTS.forEach in my new spec because it's really convenient and covers everything πŸ˜„"). At the very least there may be value in building guard rails by making it a util with solid JSDoc warnings as to:

  • when it should be used
  • why it should be used
  • when it's not needed at all

Don't get me wrong, we NEED to get better at testing mobile sizes (especially when we conditionally render based on the viewport, e.g. CTAs in a table row) and I can see this being really helpful; just wanted to point out potential scaling concerns!
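A hypothetical guard-rail util along those lines might look like this: one entry point whose JSDoc spells out when viewport parameterization is (and isn't) warranted. The name and shapes are assumptions.

```typescript
// Stand-in for the Cypress global; sketch only.
declare const describe: any;

interface ViewportSize {
  width: number;
  height: number;
}

/**
 * Runs `spec` once per given mobile viewport.
 *
 * When to use it: flows that render conditionally based on viewport
 * (e.g. table-row CTAs that collapse into an action menu on small screens).
 * Why to use it: it catches overflow/cut-off regressions that could block
 * a critical flow on mobile.
 * When it's not needed: purely CSS-responsive pages where small screens
 * don't change functionality; a single desktop run already covers those,
 * and duplicating runs only inflates suite time and Cypress Cloud usage.
 */
export const describeMobileViewports = (
  title: string,
  viewports: ViewportSize[],
  spec: (viewport: ViewportSize) => void
) => {
  viewports.forEach((viewport) => {
    describe(`${title} (${viewport.width}x${viewport.height})`, () => {
      spec(viewport);
    });
  });
};
```

Centralizing the loop in one util also gives a single place to lint or review for overuse, rather than auditing ad-hoc `forEach` loops across specs.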

Contributor Author

@abailly-akamai This is a good question, and all the points you raised here are extremely relevant. (I'll also add that, in addition to the test run length considerations, I'm somewhat reluctant to do this because of how easily and dramatically it could increase our Cypress Cloud usage.)

What is the primary goal of testing different viewports in your opinion?

You kinda touched on it, but my main goal here is to reduce the risk of some weird styling issue (or other component issue) preventing the Linode Create flow (or any other flow) from working on mobile devices/small screen sizes. My fear is that if something were to break for small screen sizes, however unlikely, we have so little visibility into it that the issue could sit around for a while before we even noticed it.

Some (perhaps contrived) examples that come to mind:

  • Some element or button that's critical to a flow gets cut off or overflows in such a way that the user is prevented from interacting with it (and therefore can't complete the flow)
  • Some component that's only present on small screen sizes (e.g. plan selection cards) breaks
  • Some element or button behaves unexpectedly with touch interactions vs clicks
    • (This is a moot point right now since the mobile test on this PR still uses clicks, but we have had cases like this; one that comes to mind is a tooltip issue where desktop users could open the tooltip with a mouse hover, but mobile users were prevented from opening it because iirc the touch event was triggering the parent component and not the tooltip button inside of it)

I wonder if there's a concern about blindly duplicating runs when they're not needed in the long term (aka, "lemme just use MOBILE_VIEWPORTS.forEach in my new spec because it's really convenient and covers everything πŸ˜„")

Yeah, this is a legitimate concern of mine, especially when it comes to our Cypress Cloud usage! And I think if we do go with an approach like this, we should definitely think about whether all these screen sizes in the MOBILE_VIEWPORTS array are actually necessary; we might be able to get away with having only 1 or 2 sizes covered. What's in there now is almost definitely overkill.

(I wonder if we have any analytics telling us what screen sizes are the most common among our users?)

Don't get me wrong, we NEED to get better at testing mobile sizes (esp when we conditionally render based on the viewport, ie: CTAs in a table row) and I can see this being really helpful, just wanted to point out potential scaling thoughts!

Absolutely! I think we should be doing something, but there's a good argument to be made that this might not be exactly the right solution (or perhaps requires some guard rails, like you said). This is something I'll keep thinking about, and I'd love to hear your thoughts if any other ideas come to mind!

Comment on lines +21 to +26
beforeEach(() => {
mockAppendFeatureFlags({
linodeCreateRefactor: makeFeatureFlagData(true),
});
mockGetFeatureFlagClientstream();
});
Contributor

Was looking for a PR to abstract/consolidate flag logic/imports but maybe it hasn't been done yet?

Contributor Author

This hasn't been done yet, but we do have a ticket! M3-7891
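One hypothetical shape for that consolidation: a single helper wrapping the two mock calls from the snippet above. The helper name and signature are assumptions; only `mockAppendFeatureFlags`, `makeFeatureFlagData`, and `mockGetFeatureFlagClientstream` come from the snippet.

```typescript
// Ambient stand-ins for the existing mock utils referenced in the snippet
// above; their real signatures may differ.
declare function mockAppendFeatureFlags(flags: Record<string, unknown>): void;
declare function makeFeatureFlagData<T>(value: T): unknown;
declare function mockGetFeatureFlagClientstream(): void;

/**
 * Hypothetical consolidated helper: mocks one or more boolean feature flags
 * and the feature flag client stream in a single call.
 */
export const mockFeatureFlags = (flags: Record<string, boolean>) => {
  const flagData = Object.fromEntries(
    Object.entries(flags).map(([name, value]) => [name, makeFeatureFlagData(value)])
  );
  mockAppendFeatureFlags(flagData);
  mockGetFeatureFlagClientstream();
};

// A beforeEach would then shrink to:
//   mockFeatureFlags({ linodeCreateRefactor: true });
```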

linodeCreatePage.selectImage('Debian 11');
linodeCreatePage.selectRegionById(linodeRegion.id);
linodeCreatePage.selectPlan('Shared CPU', 'Nanode 1 GB');
linodeCreatePage.setRootPassword(randomString(32));
Contributor

This abstraction makes sense! Especially because we often have to use the create flow to test adjacent/dependent features (I just built one for PGs).

One thing I find most difficult when working in the e2e tests (it's also true in the src CM codebase, but I know that one better) is knowing which utils/abstractions are available to me. I wonder how we could go about providing developers with summaries of those. My first thought was to start building MD files in the feature root. Right now they could just be read, but eventually we could aggregate them into docs. ex:

// packages/manager/cypress/e2e/core/linodes/helpers.md

Linode Helpers

| Helper  | Description | Methods | Example |
| ------- | ------- | ------- | ------- |
| linodeCreatePage | a util to populate linode create flow data | setLabel, setImage, ...  | linodeCreatePage.setLabel('My Label') |

Obviously it would be amazing to also find a way to auto-generate these, because maintaining them by hand would take quite a bit of work. Any idea on how to generate an API reference of sorts?

Contributor Author

Discoverability is a huge issue! (Azure and I have even discussed this a bit as she's getting acquainted with the tests and Cloud Manager source)

Weeks back I did a little bit of googling and found vitepress-jsdoc, and thought it would be really cool if we could generate reference docs for our API-v4 client library and integrate them with the rest of our Vitepress docs, but doing the same for our utils could be really useful too. (Mind you I did absolutely no work to see if this is technically viable, it was just a spur of the moment idea I never followed up on)

Contributor

ohhh that looks promising. I'll look into this one in a new task ❗

expect(expectedVpcInterface['vpc_id']).to.equal(mockVPC.id);
expect(expectedVpcInterface['ipv4']).to.be.an('object').that.is.empty;
expect(expectedVpcInterface['subnet_id']).to.equal(mockSubnet.id);
expect(expectedVpcInterface['purpose']).to.equal('vpc');
Contributor

Full disclosure, I felt a bit silly writing those assertions in my recent test, because testing a payload I explicitly mocked above seemed tedious (I was following other examples). Curious to know what primary value you find in them.

Contributor Author

This is actually pretty handy since we're testing the outgoing request payload, not the mocked response -- so the data in the request is the real data that Cloud would actually be sending to the API, and we're confirming that Cloud is making the correct request given the choices the user made in the UI.

For example, when I initially wrote the assertions for the user data test, there was a failure related to the way the user data was being encoded (which @HanaXu had caught). Once Banks fixed that and merged his PR, the test began passing with that fix in place πŸŽ‰
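The pattern described above could be sketched as follows: intercept the outgoing POST and assert on the real request payload rather than the mocked response. The route matcher and the spec body are illustrative; only the field names mirror the snippet under review.

```typescript
// Stand-ins for the Cypress/Chai globals; sketch only.
declare const cy: any;
declare const expect: any;

/** Picks the VPC interface out of a Linode create request body. */
export const findVpcInterface = (requestBody: any) =>
  (requestBody.interfaces ?? []).find((iface: any) => iface.purpose === 'vpc');

// How a spec might use it (defined but not invoked here):
export const exampleVpcPayloadAssertions = (vpcId: number, subnetId: number) => {
  // Capture the outgoing create request; the URL pattern is an assumption.
  cy.intercept('POST', '*/linode/instances').as('createLinode');
  // ...drive the create flow via the page utils...
  cy.wait('@createLinode').then((intercept: any) => {
    // Assert on what Cloud actually sent, not on the mocked response.
    const vpcInterface = findVpcInterface(intercept.request.body);
    expect(vpcInterface['vpc_id']).to.equal(vpcId);
    expect(vpcInterface['subnet_id']).to.equal(subnetId);
  });
};
```

The value is exactly what the comment above describes: a bug in payload construction (like the user data encoding issue) fails these assertions even when the mocked response makes the UI look healthy.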

Contributor

nice! thx for confirming

Contributor

@bnussman-akamai bnussman-akamai left a comment

Thanks Joe! πŸŽ‰ Having this coverage is going to be super nice.

Would it make sense to make a create folder? I think it would help me run only create-related tests in the debug UI.

[Screenshot: Cypress debug UI spec list, 2024-05-16]

I like the idea of running the tests on a mobile viewport but maybe we should just do a small number of runs. (maybe one portrait iPhone and one portrait tablet run? I could go either way on including landscape)

Contributor

@hkhalil-akamai hkhalil-akamai left a comment

Lots of great discussion here @jdamore-linode @abailly-akamai. I think these tests look great -- both the page util and breaking up the spec into multiple files really improve organization.

Contributor

Do we need this index file?

Contributor Author

@hkhalil-akamai Not really! I was initially going to have that export an object containing each page helper (like we do for the UI helpers), but didn't end up doing that since there isn't really a need to import a lot of those at once the way you might want to with the UI helpers.

It might still be convenient to have the index so each page helper can be imported from there (sort of like we do with the factories) but I'm pretty neutral on it!

Contributor

I like the abstractions here and in vpc-create-drawer.ts. I think they really help readability in the tests.

@jdamore-linode
Contributor Author

Would it makes sense to make a create folder? I think it would help me run only create related tests in the debug UI

@bnussman-akamai I think this is a good idea! My only reluctance is that soon we'll have other teams running our tests, at which point we may not be able to easily move/rename test files without breaking their automation or impacting their metrics -- so would we be comfortable with a situation where the Linodes folder is the only folder organized this way? Or would it make more sense to reorganize everything at once?

(P.S. you can search for create-linode in the UI to get just the create tests right now, but that's more of a happy accident since we don't really have or enforce any naming conventions for our specs. Somewhat related, but you might be interested in the tags POC since it's all about improving our ability to selectively run tests)

@bnussman-akamai
Contributor

I think the organization is up to you, I trust your judgement! The extra folders would be helpful in some small cases. I don't think a full reorganization is urgently needed. @jdamore-linode

Contributor

@AzureLatte AzureLatte left a comment

All tests passed!

βœ”  All specs passed!                        31:26       60       60        -        -        -  

✨ Done in 2009.32s.

@jdamore-linode jdamore-linode added Approved Multiple approvals and ready to merge! and removed Ready for Review Missing Changeset labels May 23, 2024
@jdamore-linode
Contributor Author

Merging despite test failure: the failure is unrelated to the Linode create flow and was caused by a timeout waiting for a Linode (created via the API) to boot. We may have to reevaluate our timeout length for Linode create operations going forward, but either way it's not an issue for this PR.

@jdamore-linode jdamore-linode merged commit 98f7615 into linode:develop May 23, 2024
17 of 18 checks passed
Labels
Approved Multiple approvals and ready to merge! Linode Create Refactor

6 participants