
Test what you see #14

Open
andreareginato opened this issue Oct 3, 2012 · 91 comments

@andreareginato
Collaborator

Write your thoughts about the "test what you see" best practice.

@josephlord

Largely agree and integration tests can ensure that the desired behaviour is working correctly. Do you see a case for negative tests in the controller to ensure that inappropriate information is not being disclosed (potentially in other formats)? I've been trying to check many of the security aspects at the controller level and the functionality aspects at the integration level.

@tilsammans

Very much agree. Controller specs quickly become a huge mess and are very brittle.

@oesmith

oesmith commented Oct 3, 2012

+1 to @josephlord

There's definitely a case for using controller tests for bits of the interface you can't see.

If you're having to mixin Rack::Test::Methods to write your spec, you're really writing a controller spec.

@svs

svs commented Oct 4, 2012

I don't believe that integration tests are any substitute for controller tests. Here are some reasons for writing proper controller tests

  • The controller code is basically the definition of the API for your app. Do you really want this to not be under test coverage?
  • Not writing tests for controllers means you do not get any of the advantages of TDD in your controllers. This means that your controllers run the risk of becoming bloated and unfocussed.
  • Controllers do specific things, and the test for those specific things should be in the controller. Controllers pass messages, assign data and render/redirect as required. Are you really sure you don't want to test these?
  • "Test what you see" leads to brittle tests because the UI has the potential to change quickly, while the controller logic is much more stable. Coupling your controller test result to your flash messages is an unnecessary complexity

Here's a small recipe for writing painless controller tests. Without the shared examples, it's less than 100 LoC to test all authentication, authorisation, message passing, data assignment and method invocation expectations.

https://github.com/svs/painless_controller_tests/blob/master/spec/controllers/items_controller_spec.rb
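The linked spec isn't reproduced here, but the shared-examples idea behind it can be sketched framework-free. ItemsController, its actions, and the helper below are hypothetical stand-ins, not svs's actual code:

```ruby
# Framework-free sketch of the shared-examples idea: one reusable check
# applied to every protected action. Everything here is a hypothetical
# stand-in, not the code from the linked spec.
class ItemsController
  def initialize(user)
    @user = user
  end

  def show
    @user ? :render : :redirect_to_login
  end

  def edit
    @user ? :render : :redirect_to_login
  end
end

# The "shared example": written once, reused for every action.
def requires_login?(controller_class, actions)
  actions.all? do |action|
    controller_class.new(nil).public_send(action) == :redirect_to_login
  end
end

puts requires_login?(ItemsController, [:show, :edit])  # prints "true"
```

In RSpec the same pattern would be a `shared_examples` block included once per controller, which is how the linked spec stays under 100 LoC.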

@route

route commented Oct 4, 2012

I don't believe that integration tests can replace controller tests, because integration tests are meant to test the interaction among any number of controllers. Generally speaking, why should we use an integration test to cover just one action in a controller? That is the job of a controller test. So write tests for your controllers, and write tests for the interaction between controllers. That's my point.

@tilsammans

With controller tests it always seems inevitable to mess with internals and use mocking and stubbing, and this ties the test very much to the implementation. For a run-of-the-mill CRUD controller, writing a controller spec is too much fanfare in my opinion. I just care about the interaction and the end result, and these are handled fine at a higher level. Integration tests are not used to test many controllers; they are used to test the entire MVC stack, and writing them for a single controller (a single user scenario, actually) is a good thing. When I have the user scenario covered, a plain controller spec is superfluous. I agree that they can be useful for unhappy paths.

@andreareginato
Collaborator Author

I have to admit that I wrote this guideline because lately I was writing a JSON API, and there it felt just right.
With a more visual UI this idea could change, though I would need more examples to be sure.

When thinking about this I always picture a Rails app. What I actually do is collect all the common functionality (JSON format, security, access, errors) as shared examples, as @svs pointed out. The difference is that I use them to check the final result.

But after all those comments I'm getting rather curious. Can you guys give me some real examples where something can't be done with integration tests, but it can be done with controller testing?

Actually my experience with controller tests wasn't great, for two main reasons.

  • It was getting a bit messy
  • I had the unpleasant feeling I was duplicating a lot of tests

The second one in particular is now gone.

@route

route commented Oct 7, 2012

With controller tests it always seems inevitable to mess with internals and use mocking and stubbing, and this ties the test very much to the implementation.

You don't have to use mocks if you don't need them; that applies to all kinds of tests.

Integration tests are not used to test many controllers; they are used to test the entire MVC stack, and writing them for a single controller (a single user scenario, actually) is a good thing.

Sorry, I didn't mean exactly a few controllers; I meant a scenario.

@route

route commented Oct 7, 2012

Can you guys give me some real examples where something can't be done with integration tests, but it can be done with controller testing?

Your question sounds a bit confusing to me. I'd rather say that some things can be done more easily with a controller test than with an integration test. For example: testing authentication and permissions, redirects, redirects depending on headers, sending special headers, and before/after filters that do work "invisible" on the page.
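One of these cases, a redirect that depends on a request header, can be sketched framework-free. The class, action, and header handling below are a plain-Ruby stand-in, not a real Rails API:

```ruby
# Hypothetical sketch: an action whose response depends on a request
# header -- behaviour a controller test can exercise directly, but which
# is invisible in the rendered page.
class SessionsController
  def create(headers)
    if headers['X-Requested-With'] == 'XMLHttpRequest'
      [200, 'session.json']       # AJAX clients get a JSON body back
    else
      [302, '/dashboard']         # browsers get redirected
    end
  end
end

controller = SessionsController.new
p controller.create({})                                      # [302, "/dashboard"]
p controller.create('X-Requested-With' => 'XMLHttpRequest')  # [200, "session.json"]
```

A controller spec can assert on both branches in milliseconds; reproducing the AJAX branch through a browser-driving integration test is far more work.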

@rimidl

rimidl commented Oct 8, 2012

+1 to @route, @svs. I think the same. Let me add something.

We shouldn't forget the important idea: "divide and conquer".

We should use integration specs for rough, surface-level testing. There we have to pretend to be a typical user and behave like one: think and do the same things.

We should use controller specs to check the details of the implementation. This level of spec lets us concentrate on the implementation more than integration specs do, and lets us fill in the narrow places that integration tests don't reach.

Let me note that view specs may be very helpful too, in some specific cases of course.

@svs

svs commented Oct 8, 2012

Small blog post about this topic: http://svs.io/post/32926616364/painless-controller-testing-in-rails

@glennr

glennr commented Oct 12, 2012

+1 to svs's approach.

Until recently I was very much in agreement with tilsammans, but messy controller specs are a symptom of a different problem with your testing, and more integration tests are not necessarily the solution. Give me a messy set of controller specs over a messy set of integration specs any day of the week, please.

If you don't TDD your controller code, then you're doing TDD bad, and you should feel bad.

If you attempt to get coverage through all paths of your controller code using integration specs, then you're gonna end up with a lot of (slow) integration specs. Cue the math: http://www.jbrains.ca/permalink/integrated-tests-are-a-scam-part-1

Also, doing integration tests in Rails generally implies you're using a tool like Capybara, but then where do you put your API tests? Not in your capy specs, I hope... http://www.elabs.se/blog/34-capybara-and-testing-apis For better or worse, an API can be expressed really nicely in a controller spec.

My approach now: BDD/TDD, but don't get attached to every single integration spec you write - keep only the high-value ones and let your unit tests do the rest.

@pepe

pepe commented Oct 12, 2012

@andreareginato +1

@Govinda-Fichtner

Integration tests with Cucumber/Capybara usually tend to be much slower than well-written unit tests... so for the quick health feedback unit tests give me about my app alone, I would not do without them... that seems to be an aspect a lot of people forget lately when they say they are only doing integration tests...

@sdeframond

We have to do integration tests anyway, either manually or automatically. We have to do it because it is the integrated product that is billed to the customer. Unit tests are merely a convenience for the developers, albeit a very useful one.

Manual integration tests are slower than automated integration tests. So as soon as the application becomes a bit complex it makes sense to have it highly covered with automated integration tests. If those are too slow for your workflow then it is probably a good idea to run them on some CI server. Even then some amount of manual testing is useful.

Unit tests (models and controllers) are very useful to developers because they are faster to run than integration tests and because they encourage good design, but they are redundant.

@Naomarik

Disclaimer: I'm a newb, let me know if I'm wrong on any of these points.

If following this advice, how does the author intend to test their API for apps that perform requests for JSON or XML?

Also, it would seem a lot faster to test CRUD operations by interacting directly with the controller, since you're able to set user login state rather than having to drive your whole webapp.

For instance, in a controller spec you could have

session[:user_id] = FactoryGirl.create(:user).id
post :create, :world => FactoryGirl.attributes_for(:world, :brand_id => brand.id)

Testing this in Capybara would mean hitting the login section of your site, navigating to the page, then filling in each input of your form. This would turn a 200 ms test into one taking a few seconds.

I think integration tests have their place in ensuring your webapp is generally working, but more rigorous tests of what input is acceptable seem much more efficient when done in controller specs, both in test speed and in reduced code complexity.

@marnen

marnen commented Jul 26, 2013

@sdeframond Exactly. Unit tests are less useful than integration/acceptance tests. They're great for models, but not for controllers, where they're too complicated and don't test what you really want tested.

@svs:

I don't believe that integration tests are any substitute for controller tests.

I think you're wrong, as I'll explain below.

Here are some reasons for writing proper controller tests

The controller code is basically the definition of the API for your app. Do you really want this to not be under test coverage?

No. That's why I don't trust it to controller specs.

What do I mean by that? Well, for a user-facing app (i.e. almost all of them), the API for my app is the UI. The only way to test the UI is through integration and acceptance tests. Controller tests are simply useless for this: a controller test can tell you that UsersController#show behaves as you expect, but cannot tell you whether GET /users/1 behaves as you expect. The user cares not a whit whether UsersController#show does the right thing, or whether GET /users/1 invokes UsersController#show; rather, he cares that GET /users/1 does what he expects, whether by means of UsersController#show or TreesController#climb.

In other words, controller tests don't actually test what you care about. They don't test what you see.

(I do occasionally write controller tests for controllers that manage HTTP service endpoints instead of Web pages, but even there I usually find acceptance tests more useful.)

Not writing tests for controllers means you do not get any of the advantages of TDD in your controllers. This means that your controllers run the risk of becoming bloated and unfocussed.

Wrong. I get the advantages of TDD for my controllers by testing what I see through my integration tests. And my controllers aren't bloated or unfocused; it's rare that I have more than about 3 lines of code in any action.

Controllers do specific things, and the test for those specific things should be in the controller. Controllers pass messages, assign data and render/redirect as required. Are you really sure you don't want to test these?

I do want to test these things, but not directly, just as I don't want to test my private model methods directly. They are internal implementation, and as such are sufficiently tested by means of testing the interfaces that I do care about. Test what you see.

"Test what you see" leads to brittle tests because the UI has the potential to change quickly, while the controller logic is much more stable. Coupling your controller test result to your flash messages is an unnecessary complexity

No. If the user cares about the flash message, it should be tested (and if not, don't bother testing it, but focus on whatever the user does care about). That's not unnecessary complexity; it's testing what's important to the user. And in my experience, controller tests are far more brittle than Cucumber acceptance stories.

Here's a small recipe for writing painless controller tests. Without the shared examples, it's less than 100 LoC to test all authentication, authorisation, message passing, data assignment and method invocation expectations.

https://github.com/svs/painless_controller_tests/blob/master/spec/controllers/items_controller_spec.rb

That's beautiful but useless. You're testing that get :show does the right thing, which is completely irrelevant, because you're not testing that you're actually calling get :show where you think you are, at least as far as I can tell.

@marnen

marnen commented Jul 26, 2013

Also, one more thought: fast tests are not useful if they don't actually test what you need tested.

@svs

svs commented Jul 26, 2013

I am not saying you don't need integration specs. I am saying that they are no substitute for controller specs (and model specs).

People much smarter than I have said it much better than I ever can.

http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam/
http://www.jbrains.ca/permalink/not-just-slow-integration-tests-are-a-vortex-of-doom

I posit that it is impossible to meaningfully cover all your codepaths using integration specs. The complexity explodes combinatorially when you go from one model to two to four to forty. Add in some controllers, throw in some gems and a layer of javascript and you have several million possible cases to test. This is why we push the testing down to the lowest layer possible and at higher layers we only check increasingly high level functionality such as checking parameters passed or alerts received and so on. Do you, in your integration specs, check whether an item got added to the database?

Take the case of checking for uniqueness of an email. It is possible to do this with an integration test. Then you want both name and email to be unique. And no, wait, name should be unique only if the user is a premium user... Congratulations, you just spent 15 minutes setting up this test and a minute running it with capybara each and every time, when it's so much simpler to do at the model level. These tests are in no way redundant. There is no other way to meaningfully test your app completely* in any sort of reasonable* timeframe.
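The conditional-uniqueness rule above can be sketched as plain Ruby; the rule is the point, and the `User` struct and `valid_against?` helper below are hypothetical stand-ins for a real ActiveRecord validation:

```ruby
# Plain-Ruby sketch of the rule described above: email must always be
# unique, name only for premium users. At this level every combination
# runs in milliseconds; through capybara each one is a full page round-trip.
User = Struct.new(:name, :email, :premium, keyword_init: true)

def valid_against?(user, existing)
  return false if existing.any? { |u| u.email == user.email }
  return false if user.premium && existing.any? { |u| u.name == user.name }
  true
end

existing = [User.new(name: 'ann', email: 'ann@example.com', premium: false)]

p valid_against?(User.new(name: 'ann', email: 'new@example.com', premium: false), existing) # true
p valid_against?(User.new(name: 'ann', email: 'new@example.com', premium: true),  existing) # false
p valid_against?(User.new(name: 'bob', email: 'ann@example.com', premium: false), existing) # false
```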

What do I mean by that? Well, for a user-facing app (i.e. almost all of them), the API for my app is the UI. The only way to test the UI is through integration and acceptance tests. Controller tests are simply useless for this: a controller test can tell you that UsersController#show behaves as you expect, but cannot tell you whether GET /users/1 behaves as you expect.

Agreed. But UI tests are UI tests, router tests are router tests, view tests are view tests and integration tests are integration tests. They all have different purposes. Integration tests are called so because they are only to test that the various components of your application have been integrated correctly. Run the requests through the full stack of the app to test whether routers, controllers, models and frontend clients are all behaving well with each other. An integration test's purpose is not to test validation on models or basic correctness of business logic. Down that road lies madness.

The user cares not a whit whether UsersController#show does the right thing, or whether GET /users/1 invokes UsersController#show; rather, he cares that GET /users/1 does what he expects, whether by means of UsersController#show or TreesController#climb.

The user cares not a whit whether you use Rails or PHP or have a team of pigeons randomly choosing answers. But the developers do care. Tests are the best form of documentation there is. When a unit test fails, it gives an error message that can lead developers to the problem quickly. When an integration spec fails, it just says "Expected page to have content 'success' but it does not". Great. Now go looking for the error somewhere else.

In the end, of course, follow whatever testing methodology gives you best results. It's just that I can't see how one is to test every aspect of my system using only integration tests. Or are you suggesting that we let go of testing the whole system and concentrate only on the "important" bits?

@marnen

marnen commented Jul 26, 2013

@svs:

I am not saying you don't need integration specs. I am saying that they are no substitute for controller specs (and model specs).

And I'm saying you're wrong.

People much smarter than I have said it much better than I ever can.

http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam/

I have read that article. He doesn't back up his assertions, and doesn't provide any of the sort of tests he's talking about. For these reasons, I do not think it is worth taking that post seriously.

http://www.jbrains.ca/permalink/not-just-slow-integration-tests-are-a-vortex-of-doom

I'll look at that.

I posit that it is impossible to meaningfully cover all your codepaths using integration specs.

True. And the same is also true of unit specs, at least once you try for anything past C0. No real difference here.

Do you, in your integration specs, check whether an item got added to the database?

Generally not directly (I save that for model specs, which are technically integration specs but are not usually thought of that way in Rails). Instead, I check that the item shows up on the list of items.

Take the case of checking for uniqueness of an email. It is possible to do this with an integration test. Then you want both name and email to be unique. And no wait, name should be unique only if the user is a premium user... Congratulations, you just spent 15 minutes setting up this test and a minute running it using capybara each and every time when it's so much simpler to do so at the model level.

I do this sort of testing at the model level, and I'm not sure what gave you the idea that I don't.

I'm pressed for time right now and will attempt to respond to your other points later.

@marnen

marnen commented Jul 26, 2013

Just looked at the jbrains.ca link. It's interesting, but appears to be talking about a different use of integration tests, in a different programming context. I don't think it brings much useful (pro or con) to the present discussion; at least I don't see the relevance to Rails-style tests.

More later.

@Dariusz-Choinski

It seems to me that relying only on integration testing is a bad practice. Integration tests always use views, but what about features that do not have views, that happen in the background? If you test only what you see, it means you do not test what you do not see. There are many such things (e-mail scheduling, payments, data transfers, file manipulation, format conversion, etc.) that can be overlooked in testing if you believe that controller testing is not necessary. Controller testing and integration testing are both parts of application testing. It is not good to cut off a part of the application and throw it away, saying "Oh, I don't care about it." It can happen that you release the application with plenty of bugs without being aware of them, because you didn't care. I agree that in the case of CRUD actions integration testing is enough, even better in simple cases, but I disagree that integration testing is enough in every case.

@marnen

marnen commented Jul 27, 2013

@Dariusz-Choinski:

Integration tests always use views, but what about features that do not have views, that happen in the background?

If they are features (for the user), there will always be some sort of output, be it HTML views, e-mail, JSON, or whatever. You test that output in your integration tests; it's not just views.

If there is no output, then the user never sees them, so there is no direct value for him, and they're probably best either unit-tested in isolation or (probably better) integration tested as part of whatever they support that is of value for the user.

You're complaining about an overly narrow type of integration testing. No one is actually advocating that, AFAIK.

@marnen

marnen commented Jul 29, 2013

The rest of my response to @svs:

Agreed. But UI tests are UI tests, router tests are router tests, view tests are view tests and integration tests are integration tests. They all have different purposes. Integration tests are called so because they are only to test that the various components of your application have been integrated correctly. Run the requests through the full stack of the app to test whether routers, controllers, models and frontend clients are all behaving well with each other. An integration test's purpose is not to test validation on models or basic correctness of business logic. Down that road lies madness.

Mostly agreed. Validation of models lives in the model and can be easily verified by unit testing the model. To the extent that business logic lives in the model or other atomic object, same answer. I'm not disputing this point. I do unit-test my models quite extensively. That makes sense: they're "heavy" objects with lots of state and behavior. Properly skinny controllers are not (see more on this point below).

However, if for some reason I could only have one kind of tests, I would trust my integration tests to show me that the application worked correctly as a whole. I would not trust my unit tests to do so, because they cannot.

The user cares not a whit whether you use Rails or PHP or have a team of pigeons randomly choosing answers. But the developers do care. Tests are the best form of documentation there is.

But they should document interface, not implementation. When the developers need to know about implementation, they either inspect the code or write separate developer docs. They don't (or shouldn't) look at the tests as implementation documentation: that's very much not the point of tests. This is why refactoring works so well with tests: the tests specify the interface, and you can change the implementation as you like with the confidence that the interface hasn't been broken.

When a unit test fails it gives an error message that can lead developers to solve the problem quickly. When an integration spec fails it just say "Expected page to have content 'success' but it does not". Great. Now go looking for the error somewhere else.

That has not been my experience at all. Even in quite complex applications, I can generally tell exactly where the error is from my integration test failures. When I can't, it has always been due to complex interactions between controllers and models that wouldn't have been caught by controller unit tests anyway!

Perhaps your integration tests are written at a different level of granularity if you're having a problem in this respect?

It's just that I can't see how one is to test every aspect of my system using only integration tests. Or are you suggesting that we let go of testing the whole system and concentrate only on the "important" bits?

I don't test every aspect of my system, I suppose—that's not feasible in a system of any complexity. Rather, I test every aspect of my system that the user cares about. If something has not been specified in a user story, then I believe its behavior is undefined, so I'm not going to write a test for it. (Before you accuse me of being slipshod here, note that part of my process of writing user stories is asking the user detailed questions about what the system should do in every case that the user or I can think of. Those answers get written into tests.)


Another reason that I don't find much utility in writing Rails controller tests is pure practicality. I follow "skinny controller, fat model" as a matter of course, which means that my controller actions are simply brain-dead glue code. Anything complex in a controller action gets refactored into a model method. This means that my controller actions are typically no more complex than

def index
  @user = session[:current_user]
  @posts = @user.posts.published
end

There's nothing interesting to test here in isolation: if you try, you'll wind up mocking or stubbing so much that it isn't really a test. So you test Post.published, and you test the integration. At least, that's how it looks to me. I see how to write a unit spec for this action; I just don't see how to write a meaningful unit spec for it.
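The division of labour described here, push the interesting logic into the model and unit-test it there, can be sketched framework-free. `Post` and `published` below are plain-Ruby stand-ins, not a real ActiveRecord model or scope:

```ruby
require 'time'

# Plain-Ruby stand-in for the model-side logic: `published` is the piece
# worth unit testing; a skinny controller action would just call it.
Post = Struct.new(:title, :published_at)

def published(posts, now = Time.now)
  posts.select { |post| post.published_at && post.published_at <= now }
end

posts = [
  Post.new('live',   Time.parse('2012-01-01')),
  Post.new('draft',  nil),              # never published
  Post.new('queued', Time.now + 3600)   # scheduled for the future
]

p published(posts).map(&:title)  # ["live"]
```

Test the boundary cases (nil, future dates) here at the model level, and let an integration test confirm only that the published list actually reaches the page.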

I'd actually say—and I hadn't thought about it this way before—that if your controller methods benefit from unit tests, then that probably means your controllers are too fat.

@josephlord

@marnen Where do you place (and test) access controls? Surely controllers (and their tests) are the right place to put the security boundary and certainly to test it.

@marnen

marnen commented Jul 29, 2013

@josephlord What sort of access controls are you referring to? Authentication? Authorization? Something else?

@marnen

marnen commented Jul 29, 2013

Actually, I guess I can expand on that before @josephlord replies.

First of all, anything that affects the user's experience needs to be in an acceptance/integration test so that it's verified to be present where it's supposed to be. That's a given.

Now, as far as authorization goes, I like putting as much of that as possible in the model (Cancan's approach is good here): surely the model (or a specific Permission model) should be smart enough to determine who's allowed to look at it. I think that it makes sense to either have something like if @user.can? :read, @post, if @post.reader? @user, or even if Permission.for(@user, @post, :read), so that all the logic is in model methods, and is tested at the model level. At that point, I don't really see what a controller test gets you that hasn't already been covered by the model and integration tests: the controller is just calling model methods anyway, and has no logic of its own (except perhaps a redirect if @user.can? :read, @post turns out to be false).
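The model-level authorization described above can be sketched as a hand-rolled `can?` method; this is Cancan-like in spirit only, not Cancan's actual API, and the class and rules are hypothetical:

```ruby
# Hand-rolled sketch of model-level authorization (Cancan-like in spirit,
# not Cancan's real API). The controller would only call `can?` and
# redirect when it returns false, leaving nothing interesting to unit-test
# at the controller level.
class Post
  attr_reader :author, :visible

  def initialize(author:, visible:)
    @author  = author
    @visible = visible
  end

  # All the permission logic lives here, testable without any controller.
  def can?(user, action)
    case action
    when :read  then visible || user == author
    when :write then user == author
    else false
    end
  end
end

post = Post.new(author: 'ann', visible: false)
p post.can?('ann', :read)   # true
p post.can?('bob', :read)   # false
```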

Authentication does probably belong more in the controller than authorization, but here again, I'm not sure the controller methods do much that's interesting to test in isolation.

Does that answer your questions?

@josephlord

@marnen Pretty much answers it. I would have the model knowing who can access specific objects but wouldn't expect it to check who the current user is on requests as I think that is a controller job. I also allow different parameters based on who the user is at times (e.g. setting password and password_confirmation are only allowed for the current user).

These functions should be tested in the controller (particularly the unauthorized cases) as a malicious actor could bypass the HTML form tested by the integration testing and insert additional parameters.

@marnen

marnen commented Jul 29, 2013

@josephlord:

I would have the model knowing who can access specific objects but wouldn't expect it to check who the current user is on requests as I think that is a controller job.

Exactly. The controller passes the user to the model method as a parameter.

I also allow different parameters based on who the user is at times (e.g. setting password and password_confirmation are only allowed for the current user).

So your authorization logic actually has something like

class User
  def can_change_password?(user)
    if user == session[:current_user]
      # ...
    end
  end
end

?

That's a bit tough to know what to do with; I think I'd call it poor MVC myself, since the model normally shouldn't be touching the session (unless you have a model like Authlogic's UserSession class, whose sole purpose is to abstract the session features). I think I'd probably do

class User
  def can_change_password?(user)
    user == self
  end
end

which is easier to test at the model level and keeps MVC layers more separate.

These functions should be tested in the controller (particularly the unauthorized cases) as a malicious actor could bypass the HTML form tested by the integration testing and insert additional parameters.

Controller testing won't actually help here, since it does not guarantee that you're testing the action that the malicious actor would be requesting: the malicious actor is not requesting a controller action, but rather a URL. So do this as an integration test by driving curl or something.

The fundamental point is this: users request URLs, not controller actions, and therefore testing a controller action cannot, by itself, verify that the user will see what's in that action, because it does not test that the user ever gets to that action from a given URL.

@josephlord

@marnen Actually my user controller (using Rails4/strong_parameters) has this logic:

    def model_params(newModel = false)
      if @user_user == current_user || newModel
        params.require(:user_user).permit(:loginname, :displayname, :email, :password, :password_confirmation)
      else
        params.require(:user_user).permit(:loginname, :displayname, :email)
      end
    end
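A framework-free sketch of what that branching does (with `permit` hand-rolled as a stand-in for strong_parameters, and simplified hypothetical arguments in place of controller state):

```ruby
# Framework-free stand-in for the model_params branching above: permit
# the password fields only when the record belongs to the current user
# or is new. `permit` is hand-rolled here, not strong_parameters.
BASE_KEYS  = [:loginname, :displayname, :email]
OWNER_KEYS = BASE_KEYS + [:password, :password_confirmation]

def permit(params, keys)
  params.select { |key, _| keys.include?(key) }
end

def model_params(params, owner:, current_user:, new_model: false)
  keys = (owner == current_user || new_model) ? OWNER_KEYS : BASE_KEYS
  permit(params, keys)
end

# A malicious request that smuggles in extra fields:
attack = { loginname: 'bob', password: 'hacked', admin: true }
p model_params(attack, owner: 'ann', current_user: 'bob').keys  # [:loginname]
```

A controller test can post the `attack` hash directly and assert the extra keys are stripped, which is exactly the unauthorized path an HTML-form-driven integration test never exercises.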

The controllers will in other places ask the model whether the user can take an action before doing it. The model never knows the current user, and the current user doesn't get passed to the model (except where that user is really the subject of the operation).

Controller testing won't actually help here, since it does not guarantee that you're testing the action that the malicious actor would be requesting: the malicious actor is not requesting a controller action, but rather a URL. So do this as an integration test by driving curl or something.

Actually it is the other way round. Someone may find a way to access the controller by URLs that you don't expect (different format, different request type, etc.), but they need to hit a controller at some point, so that is the choke point for requests and the right place for tests to ensure safe parameters and correct access rights. You could test with curl, but that would be less complete (e.g. if Rails 5 adds a new request format), harder, and slower.

The fundamental point is this: users request URLs, not controller actions, and therefore testing a controller action cannot, by itself, verify that the user will see what's in that action, because it does not test that the user ever gets to that action from a given URL.

I largely agree for testing the behaviour that you want to see, but there is still value in testing the unexpected, where you don't really care what the response looks like but you don't want it to succeed.

@marnen

marnen commented Mar 26, 2015

Okay. I wanted to ask if the type of tests we write are enough.

I'd say it's unlikely, but not absolutely impossible. How do you test complex model logic?

The reason they chose RSpec, I guess, is to have the ability to test models, such as in the last assertion.

Then clearly, they didn't know enough to make a useful decision: Cucumber steps are generally implemented in RSpec, so that would have been no problem.

They also decided to only test the GUI through feature specs, as it seemed to be the most important thing to test.

Yes, if you have to only have one kind of test for some reason, that is generally the right kind, because it can test everything, whereas unit tests cannot. But unit tests are often useful anyway; it's impractical to set up an integration test for every permutation of complex model logic.

Basically all tests were written after MVC.

...which could explain the abysmal coverage numbers. Are they at least doing all new development test-first?

@Fryie
Copy link

Fryie commented May 5, 2015

"Test what you see" seems odd to me. In fact, users don't "see" your models either. Yet still you are not advocating that we only write acceptance tests?

The most important problems with acceptance tests are:

  • they don't scale (I cannot acceptance-test EVERY path through my code)
  • they are slow (even if Rails controller tests are themselves slow, they're still faster than an acceptance test - especially on a JS-heavy website)

Of course I write acceptance tests for every user story. But what about error handling? What about "my redis connection died and I cannot enqueue this job"? What about "the third-party API is down and I cannot process payments"? What about invalid or malicious user input, etc.? Testing all of these cases (which may well appear on multiple parts of the site) through the UI doesn't seem practical to me.

If you say that there is no value in unit testing controllers you're basically saying that controllers are not meaningful units within your code (either that, or you're against unit testing in general).

The controller's responsibility is a) to take input from the user and pass it to the app and b) to parse the response from the app and deal with it in a way that makes sense to a user (either by returning some JSON in an API or by setting up instance variables for a view). That's enough responsibility that a unit test makes sense at least in some cases.

Also, on another note, "skinny controller, fat model" is an antipattern IMHO. The more reasonable approach is "skinny everything". And just because classes are skinny, doesn't mean there's no value in unit testing them.

To each their own, I guess, but I see no a priori reason why unit testing controllers should be considered bad.

@marnen
Copy link

marnen commented May 5, 2015

@Fryie:

"Test what you see" seems odd to me.

Why? It is the only way to know that your application works as expected for the user.

In fact, users don't "see" your models either. Yet still you are not advocating that we only write acceptance tests?

Acceptance tests are the most important tests, because they are the only tests that can show that the application is working as a whole.

I write unit tests to make refactoring easier and to increase the number of execution paths I can test, not because I think they will tell me anything about whether the application as a whole works properly. Unit tests -- even controller unit tests -- cannot give you any assurance at all that the application works as desired for the user. Only acceptance tests can do that.

The most important problems with acceptance tests are:

they don't scale (I cannot acceptance-test EVERY path through my code)

This right here is a big reason to have unit tests. The other big reason is that it's often more efficient to test at a smaller granularity before running acceptance tests.

However, don't delude yourself into thinking that you can feasibly unit-test every path through your code either. That quickly runs into huge combinatorial explosions. C0 plus some common C1 use cases is usually as good as you're going to get.

they are slow (even if Rails controller tests are themselves slow, they're still faster than an acceptance test - especially on a JS-heavy website)

They can actually be quite fast if you use a headless browser. But I'd rather my tests be correct than fast.

Of course I write acceptance tests for every user story. But what about error handling? What about "my redis connection died and I cannot enqueue this job"? What about "the third-party API is down and I cannot process payments"?

Acceptance tests are ideal for this. Utilities such as WebMock are great for simulating error conditions caused by external services.
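WebMock stubs at the HTTP layer; the same idea can be shown framework-free by injecting a failing fake client. This is a sketch with hypothetical names, not WebMock itself:

```ruby
# Sketch: simulating "the third-party API is down" by injecting a fake
# client that raises, then asserting the app degrades gracefully.
class GatewayDown < StandardError; end

class FailingGateway
  def charge(_amount)
    raise GatewayDown, "connection refused"
  end
end

class CheckoutService
  def initialize(gateway:)
    @gateway = gateway
  end

  def call(amount)
    @gateway.charge(amount)
    :paid
  rescue GatewayDown
    :payment_failed  # what an acceptance test would observe in the UI
  end
end
```

An acceptance test would then drive the checkout UI with the failing fake wired in and assert on the error message the user sees.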

What about invalid or malicious user input, etc.?

You can use acceptance tests for that.

Testing all of these cases (which may well appear on multiple parts of the site) through the UI doesn't seem practical to me.

It's quite practical. I do it on a regular basis. Read my earlier posts in this thread for suggestions on how.

Any test that simulates user input should involve the UI, I think.

If you say that there is no value in unit testing controllers you're basically saying that controllers are not meaningful units within your code (either that, or you're against unit testing in general).

I think you may have responded to this thread without actually reading it, because I already answered this above. Briefly put, while a controller may be a meaningful unit, in a proper Rails application, the controller should be nothing more than a tiny piece of brain-dead glue code. All the logic should be in models and service objects, so you can't meaningfully unit-test your controllers without mocking all their collaborators -- and at that point, you're testing your mocks, not your controllers.

If your controllers have enough logic in them to benefit from unit tests, then you should refactor them.

Am I against unit testing in general? No. But I don't think it's anywhere near as important as acceptance testing. I'd be willing to omit unit tests for simple models too, if I had suitable acceptance tests (though I don't tend to do this in practice).

The controller's responsibility is a) to take input from the user and pass it to the app

Yes.

and b) to parse the response from the app and deal with it in a way that makes sense to a user

Not as you stated. Marshaling data for the view is the controller's responsibility, but the parsing and processing of user input is supposed to take place in the model layer.

That's enough responsibility that a unit test makes sense at least in some cases.

And that's what's wrong with your controllers. You should create model methods for the processing. Then the controller consumes the results of those methods and passes them to the view.
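A minimal sketch of that shape (plain Ruby with hypothetical names, standing in for a real Rails controller): the controller contains no logic of its own, just delegation and a hand-off to the view.

```ruby
# Hypothetical sketch of a "skinny" controller: it calls one model
# method and exposes the result; there is no logic here worth unit-testing.
class Order
  def self.recent
    # In a real app this would be a scope or query; a stub result here.
    [{ id: 1, total: 25.0 }]
  end
end

class OrdersController
  attr_reader :orders  # stands in for the @orders ivar a view would read

  def index
    @orders = Order.recent  # brain-dead glue: delegate, then expose
  end
end
```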

Also, on another note, "skinny controller, fat model" is an antipattern IMHO. The more reasonable approach is "skinny everything".

"Fat model" as I interpret it means that the model layer is fat. Of course no one class should be fat. But almost nothing should happen in the controller layer. The controller just brainlessly passes data from model to view, while all the logic is in the model layer.

That's what "skinny controller, fat model" really means, I believe. And that is proper Rails MVC, not an antipattern.

And just because classes are skinny, doesn't mean there's no value in unit testing them.

If a class does nothing but pass data back and forth between its collaborators, and contains no logic of its own, then there is nothing to unit test that's meaningful.

And that is what a proper Rails controller should look like.

To each their own, I guess,

Software engineering ought to be based on demonstrable facts. I do not believe that "to each their own" is a phrase that has any place in this discipline.

but I see no a priori reason why unit testing controllers should be considered bad.

Because it's unnecessary, it's painful to implement, it verifies the wrong things, and it's only useful if you're doing MVC wrong. Add those facts up and the conclusion is quite clear: it's a waste of time and effort. The time would be better spent refactoring logic into the model layer and writing acceptance tests.

@sdeframond
Copy link

Relevant: a very interesting
discussion
between DHH,
Martin Fowler and Kent Beck about what to test and why.


@marnen
Copy link

marnen commented May 6, 2015

@sdeframond: ooh, that's fascinating (though I wish they'd transcribed it -- I have more time to read than to watch video)! I'll note, however, that DHH apparently thinks hexagonal Rails is a bad thing, while I think it's a path to better use of Rails MVC. (In general I don't think much of DHH these days -- he did a great job with early Rails, but can't seem to see beyond it.)

@Fryie
Copy link

Fryie commented May 6, 2015

Software engineering ought to be based on demonstrable facts. I do not believe that "to each their own" is a phrase that has any place in this discipline.

Something does not become a science simply by someone stating that it is. If you want demonstrable facts, please cite extensive peer-reviewed studies, not just personal anecdotes with thousands of possible confounders. Failing that, I suppose that yes, it is indeed a matter of opinion.

I think we have different ways of writing both applications and tests in general. Most importantly, I do not write tests to verify correctness. In fact, in almost all cases I am much quicker verifying correct behaviour by hand.

The reason I test is to catch regressions, and for that purpose unit tests are as valuable as integration tests. Of course unit tests can be brittle, but so can integration tests ("whoops, I changed the post login message, now I have to rewrite my tests"), so it's all a question of writing good tests anyway.

Not as you stated. Marshaling data for the view is the controller's responsibility, but the parsing and processing of user input is supposed to take place in the model layer.

I disagree. My models should not have to know that they are part of a Rails app (it's bad enough that they're tightly coupled to ActiveRecord, i.e. to the database). I should be able to take my models and reuse them somewhere else; that's not just theory, I've done that before.

The controller on the other hand is the boundary, the entity that translates web requests into business requests and business responses into web responses (including setting up instance variables for the view, rendering a JSON representation, etc). IMHO, this is similar to the approach discussed by Uncle Bob in Architecture: The Lost Years.

@marnen
Copy link

marnen commented May 6, 2015

@Fryie:

Something does not become a science simply by someone stating that it is.

...which I didn't do. I merely said that we should take an evidence-based approach to software engineering, not that it was a science.

Most importantly, I do not write tests to verify correctness. In fact, in almost all cases I am much quicker verifying correct behaviour by hand.

What do you write tests for, then? (Yes, I see you said regressions, but is that the only purpose?) And what do you mean by "correctness" here? (I can think of at least two meanings that make sense in this context.)

You're right that it's generally quicker to run a one-off test by hand than to automate it. However, refactoring and other maintenance activities require repetition of the same tests over and over, to make sure that nothing breaks during the process. And that's where automation comes in.

Also, think about this: the way you make sure your application is correct is by catching regressions. And catching regressions has no purpose except to ensure your application is correct. So what do you mean when you say you don't write tests to ensure that the application is correct? What would that kind of test be, to you?

My models should not have to know that they are part of a Rails app

Agreed. That is completely orthogonal to the idea that the model layer should be where data processing happens. The model can process data received from the controller without knowing where it came from or where it's going -- and it should. The controller should call model methods and consume the return values.

The service objects, OTOH (which are part of the model layer but somewhat different) generally probably do know that they're part of a Rails app, and have less reusability.

The controller on the other hand is the boundary, the entity that translates web requests into business requests and business responses into web responses

That is how Reenskaug-style MVC is designed, as used in Smalltalk and in Cocoa. It is emphatically not how Rails MVC (originally called Model2 MVC, from the name of an earlier implementation of this pattern) is intended to work. There is more than one MVC architecture paradigm out there, and the term is thus a bit overloaded.

Reenskaug MVC doesn't work very well for server-side Web applications, or so I understand. It's unfortunate that the related but different paradigm for Web applications is called by the same name even though it's not the same thing.

The controller in Reenskaug MVC is an intelligent object, responsible for quite a lot of stuff (and thus it makes sense to unit-test it). The controller in Rails MVC should be as dumb as you can possibly make it, simply delegating to intelligent classes in the model layer.

Right now it sounds like you're trying to do Reenskaug MVC in a framework that is designed for a different pattern altogether.

@sdeframond
Copy link

By the way, does any of you know how to do proper integration tests with
asynchronous jobs? I am working with Sidekiq now but information about
other frameworks would be interesting too.

I haven't found better than to test that 1) the job was correctly
enqueued, then 2) the job does what it should when run. This is kind of
a mixed approach between unit and integration testing. I wish I could
write end-to-end integration tests though.

What do you think?

-Sam


@Fryie
Copy link

Fryie commented May 7, 2015

While there are options to run Sidekiq jobs inline so they execute directly (see the docs), I think it's not a particularly good idea.

Basically there are several (not exclusive, but complementary strategies):

  • Test both components in isolation (the enqueuing and the worker itself) - which you already do
  • Test the contract between the two (e.g. with something like pacto)
  • Have a dedicated test environment that loads both components and runs an end-to-end test

The last option is very expensive (and sometimes not feasible, e.g. if you have a delay for the worker), so IMHO it should be restricted to very important cases.
I think this presentation by Martin Fowler about microservices testing is interesting in this regard.

@marnen
Copy link

marnen commented May 7, 2015

@sdeframond:

By the way, does any of you know how to do proper integration tests with
asynchronous jobs?

That's a really interesting question. I'm trying to remember how I've handled this in the past.

I think I would tend to have the job run synchronously in the testing environment; otherwise there's really no way to follow the whole process through. Or (mostly equivalent) somehow get a promise from the enqueued job that resolves when it's done, and test that.

It seems to me that this is the approach I've taken in the past: run synchronously if Rails.env.test?. But I don't have access to that code now to check.

Another approach would be to treat the queue like an external service: mock the results of the enqueued job, then test the job logic elsewhere.

I'm not sure which of these I like better. They all have problems. I'd tend to decide on a case-by-case basis.
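The "run synchronously under test" option can be sketched framework-free (hypothetical names; Sidekiq's real testing modes provide the equivalent behaviour):

```ruby
# Sketch: a queue that runs jobs inline when a test flag is set,
# so an end-to-end test can follow the whole process through.
class JobQueue
  attr_reader :pending

  def initialize(inline: false)
    @inline = inline
    @pending = []
  end

  def enqueue(job)
    if @inline
      job.call          # test mode: run synchronously, right now
    else
      @pending << job   # production mode: defer to a worker
    end
  end
end
```

In a Rails app the `inline:` flag would typically be driven by something like `Rails.env.test?` or the queue library's own test configuration.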

@tgaff
Copy link

tgaff commented Jan 24, 2016

Considering how contentious this rule is, I really think it should be removed from the guide as a recommendation.
If the goal of betterspecs is to suggest guidelines based on the collected knowledge, experience, and general consensus of RSpec users on the best ways to use RSpec, then it doesn't seem that we're serving that goal by listing something with so little agreement.

@marnen
Copy link

marnen commented Jan 24, 2016

@tgaff The principle isn't controversial. You'll notice that most of the disagreement comes from people who are new to RSpec, not from experienced RSpec users.

Besides, if it were obvious, there would be no need to recommend it, would there?

@Fryie
Copy link

Fryie commented Jan 28, 2016

I don't want to add to this discussion again, but to state that this principle is not controversial is ... quite a statement. I also just watched a screencast by Gary Bernhardt yesterday, where he refactored a big controller test - in the end the controller spec was smaller and more isolated, but it was still kept in place.

I'll furthermore refer to this blog post.

@ryanmtaylor
Copy link

ryanmtaylor commented Feb 13, 2017

Testing a controller can fall into multiple categories!

  1. Testing controllers as an integration test, before they form a UI, is one thing -- this clearly is not "testing what you see".

  2. Closely related, testing a controller (like an API) because the application has to interface with something is Contract Testing. In certain scenarios you may want to do this, like if another service wants to use your API.

  3. Testing a database write/update won't always fall under "unit testing a model", but when it does you're better off testing the model.

  4. Testing a more complex database write (across models) suggests your program has high-level actions that the user (or system) is trying to accomplish. BDD/DDD would say these are features and domain actions. They come directly from business requirements. You should test business requirements rather than the ins and outs of the controller itself. Test these imperative Commands because (a) we must ensure and test that we meet business requirements and (b) features are less likely to change than a general controller or integration test. If you're familiar with Facebook's Flux pattern or DDD, the idea of Commands as general endpoints that map to a user/system goal may make more sense.

If you do have to test the ins and outs of the controller, you're back to case 1, where you're defining a contract.

Once again: You should unit test, test business requirements (commands), acceptance test, and then integration test -- in that order, because that's the order in which things are liable to change.

@Fryie
Copy link

Fryie commented Feb 14, 2017

@ryanmtaylor Just answer me this question: what is a controller's responsibility? You can't decide whether and how controllers should be tested before you answer that.

My opinion is that Rails controllers by design / common usage do a really, really poor job at adhering to the SRP. Are they supposed to cover business logic? Parameter handling? Directing routes to specific entrypoints?
Different projects will use controllers in different ways. If your controllers are extremely dumb (i.e. they do exactly zero things before piping everything into, say, a Trailblazer operation), then by all means don't unit test them. But that's probably not how 95% of Rails projects are set up. As soon as there is even just one strong parameters method in there, you basically have parameter validation, and then that's logic that should be covered in your tests.

@brunofacca
Copy link

brunofacca commented Jun 22, 2017

IMO this guideline should be updated.

Now I only create integration tests using RSpec and Capybara.

It should probably say RSpec and/or Capybara. I don't think feature specs with Capybara can replace controller specs and/or request specs in most cases. Even if they could, it would result in a slow and likely brittle test suite.

For instance, Capybara is not suited for testing APIs. You should probably mention request specs, which are suited for API testing and widely used in integration tests as of Rails 5.

The RSpec 3.5 release notes say:

For new Rails apps: we don't recommend adding the rails-controller-testing gem to your application. The official recommendation of the Rails team and the RSpec core team is to write request specs instead. Request specs allow you to focus on a single controller action, but unlike controller tests involve the router, the middleware stack, and both rack requests and responses. This adds realism to the test that you are writing, and helps avoid many of the issues that are common in controller specs. In Rails 5, request specs are significantly faster than either request or controller specs were in Rails 4, thanks to the work by Eileen Uchitelle of the Rails Committer Team.

@marnen
Copy link

marnen commented Jun 23, 2017

@Fryie:

Are [controllers] supposed to cover business logic?

Explicitly not. That's the model's job, and always has been as far as Rails MVC is concerned. This is nothing new.

If your controllers are extremely dumb (i.e. they do exactly zero things before piping everything into, say, a Trailblazer operation), then by all means don't unit test them.

That's how I strive to structure my controllers (and usually succeed). Anything complex goes into a model or a service object (which is really a special kind of model).

But that's probably not how 95% of Rails projects are set up.

You may be right, but "skinny controller, fat model" has long been the Rails ideal. Therefore, if you're doing Rails right, controller unit tests are generally not meaningful. If you need controller tests, you're probably doing Rails wrong by making your controllers too fat.

As soon as there is even just one strong parameters method in there, you basically have parameter validation, and then that's logic that should be covered in your tests.

...which is why I hate strong parameters. That kind of logic belongs in the model layer. (As I've said elsewhere in this thread, I basically proxy my strong parameter logic to the model layer in my applications. That way I work with, rather than against, the framework, but still keep the data validation logic in my models.)
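One way to sketch that proxying (hypothetical names; a real controller would still call `params.require(...).permit(...)`, just with a whitelist the model owns):

```ruby
# Sketch: the model owns the list of permitted attributes, and the
# controller just consumes it. A plain Hash filter stands in for
# strong parameters here.
class UserModel
  def self.permitted_params
    %i[loginname displayname email]
  end
end

# Stand-in for params.require(:user).permit(*UserModel.permitted_params)
def permit(raw, allowed)
  raw.select { |key, _| allowed.include?(key) }
end
```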

@brunofacca:

I don't think feature specs with Capybara can replace controller specs and/or request specs in most cases.

They can (especially with Cucumber on top) and should, if the controllers are sufficiently skinny. (If they're not sufficiently skinny, that's a problem to fix, not to palliate with more tests.)

For instance, Capybara is not suited for testing APIs. You should probably mention request specs, which are suited for API testing and widely used in integration tests as of Rails 5.

For API testing, I very highly recommend the https://github.com/zipmark/rspec_api_documentation gem.

@brunofacca
Copy link

@marnen:

They can (especially with Cucumber on top) and should, if the controllers are sufficiently skinny. (If they're not sufficiently skinny, that's a problem to fix, not to palliate with more tests.)

I'm referring to applications with extremely skinny controllers. Let me rephrase: feature specs with Capybara may replace controller/request specs, but that will result in a slower test suite. Additionally, finding the problems behind test failures will likely take longer (lower-level tests help to pinpoint issues), and the fact that your tests rely on DOM elements will likely increase brittleness.

For API testing, I very highly recommend the https://github.com/zipmark/rspec_api_documentation gem

I did not know that gem. Looks great, thank you for the recommendation.

@marnen
Copy link

marnen commented Jun 23, 2017

@brunofacca:

Let me rephrase: Feature specs with Capybara may replace controller/request specs, but it will result in a slower test suite.

So what? The tests will be more correct, especially if the controllers are skinny.

finding the problems behind test failures should take longer (lower-level tests help to pinpoint issues)

That's why we have model unit tests. If the controllers are sufficiently skinny, there is essentially nothing in them that will break, so there is nothing in them to test.

and the fact that your tests rely on DOM elements will likely increase brittleness

The fact that the tests rely on DOM elements increases correctness. I want my tests to break when the DOM changes significantly, because the DOM is what the user interacts with.

@brunofacca

brunofacca commented Jun 23, 2017

@marnen:

So what? The tests will be more correct, especially if the controllers are skinny.

IMO you can't enjoy the full benefits of the TDD process with a slow feedback loop. Also, if tests take too long to run, it is easier to lose focus. I believe tests can be correct and fast at the same time.

The fact that the tests rely on DOM elements increases correctness. I want my tests to break when the DOM changes significantly, because the DOM is what the user interacts with.

The DOM may not have to change significantly to break your tests.

Anyway, I suggest we agree to disagree, as this seems like one of those discussions that can go on forever.

@marnen

marnen commented Jun 23, 2017

@brunofacca:

IMO you can't enjoy the full benefits of the TDD process with a slow feedback loop.

That's true to a point, which is why we use the model unit tests as the inner loop. Then we use the feature/acceptance tests as an outer loop if they're too slow to run all the time.

Personally, though, I am so convinced of the benefits of test-first development that I'm willing to be pretty patient for slightly slower but correct tests to run.

Also, if tests take too long to run, it is easier to lose focus.

I use Guard and other tools that help track that focus for me.

I believe tests can be correct and fast at the same time.

That's the ideal, but if I have to choose one of the two, I will choose correct over fast nearly every time—within reason: I don't recompile the Ruby interpreter or reinstall my gems for every test, after all. :)

The DOM may not have to change significantly to break your tests.

I want my tests to break if the DOM changes in such a way that the user would have a different experience. If they do not do so, then the tests are not sufficiently exercising the UI.

OTOH, if the tests break frequently when the DOM changes in trivial ways, then the tests are too picky. But it's easy to write tests that work through the UI and are just picky enough. The way to do that is to write tests that think like a user: look for display text, not classes or IDs, don't pay too much attention to element order if the user wouldn't, and so on.
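The contrast above can be sketched concretely. This stdlib-only REXML example uses a made-up page fragment to show the two assertion styles side by side; in a real feature spec they would be Capybara's `have_css` and `have_content` matchers.

```ruby
require "rexml/document"

# A made-up page fragment; in a real suite Capybara would render this.
page = REXML::Document.new(<<~HTML)
  <main>
    <div id="msg-42" class="alert alert-success">Your order was placed</div>
  </main>
HTML

# Markup-coupled assertion: passes today, but breaks the moment a
# designer renames the id or restyles the flash message.
picky = !REXML::XPath.first(page, "//div[@id='msg-42']").nil?

# User-level assertion: keeps passing across redesigns as long as the
# visible confirmation text survives, which is what the user sees.
robust = REXML::XPath.match(page, "//*").any? do |node|
  node.text.to_s.include?("Your order was placed")
end

[picky, robust]  # => [true, true]
```

Both assertions pass against today's markup; the difference is which one still passes after a redesign that keeps the user-visible behaviour intact.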

Anyway, I suggest we agree to disagree

I don't do that; I think it's sloppy. This is a field where facts are verifiable, which means that if we disagree, then probably at least one (more likely both) of us has more to learn.

@brunofacca

brunofacca commented Jun 23, 2017

@marnen:

I don't do that; I think it's sloppy. This is a field where facts are verifiable, which means that if we disagree, then probably at least one (more likely both) of us has more to learn.

The fact that facts are verifiable in this field does not stop some opinion-based discussions from going on "forever", such as: a) The mockists vs. classicists discussion; b) Extracting model code to concerns vs. service objects in Rails (DHH defends the former); c) Which programming language is best suited for a specific task. I believe our attachment to specific points of view and the fact that most people don't like being proven wrong (even though it provides a great opportunity for learning) tends to skew our perception/interpretation of verifiable facts. I believe they call it "confirmation bias". Anyway, that's a whole other (philosophical) discussion.

I agree that, as long as we enter this kind of technical "debate" with an open mind, we can learn a lot. I sure learned from our discussion, as I'm relatively new to testing and striving to learn as much as I can about the "best practices". In that spirit, thanks for a good talk :)

@brunofacca

@marnen: I had given some thought to the points discussed in the previous comments and tried out your approach while developing a small Rails app over the last few weeks. I did not write any controller tests for web views. I have changed my mind and now agree that feature specs can replace controller specs in web apps if the controllers are skinny enough (so if a feature spec fails due to a controller problem, the cause would be quick to pinpoint). After all, everything that we may test in a controller spec is also tested in a well-written feature spec. Although feature specs are much slower than controller specs, our apps need them for UI testing anyway, so the slowness is inevitable.

@marnen

marnen commented Aug 16, 2017

@brunofacca:

Although feature specs are much slower than controller specs, our apps need them for UI testing anyway, so the slowness is inevitable.

That's basically my view. I'm glad it worked out in practice for you!

@aesyondu

aesyondu commented Apr 2, 2020

I don't want to add to this discussion again, but to state that this principle is not controversial is... quite a statement. I also just watched a screencast by Gary Bernhardt yesterday in which he refactored a big controller test; in the end the controller spec was smaller and more isolated, but it was still kept in place.

Not sure if it's just me, but the link provided now resolves to a porn site, http://solnic.eu/.*. So be WARNED.

@marnen

marnen commented Apr 2, 2020

@aesyondu Yikes! Looks like Piotr Solnica's site is now at http://solnic.codes, for what it's worth.
