Test what you see #14
Largely agree and integration tests can ensure that the desired behaviour is working correctly. Do you see a case for negative tests in the controller to ensure that inappropriate information is not being disclosed (potentially in other formats)? I've been trying to check many of the security aspects at the controller level and the functionality aspects at the integration level. |
Very much agree. Controller specs quickly become a huge mess and are very brittle. |
+1 to @josephlord There's definitely a case for using controller tests for bits of the interface you can't see. If you're having to mixin Rack::Test::Methods to write your spec, you're really writing a controller spec. |
I don't believe that integration tests are any substitute for controller tests. Here are some reasons for writing proper controller tests
Here's a small recipe for writing painless controller tests. Without the shared examples, it's less than 100 LoC to test all authentication, authorisation, message passing, data assignment and method invocation expectations. |
I don't believe that integration tests can replace controller tests, because integration tests are for testing the interaction among any number of controllers. Generally speaking, why should we use an integration test to cover just one action in one controller? That is a job for controller tests. So: write tests for your controllers, and write tests for the interaction between controllers. That's my point. |
With controller tests it always seems inevitable to mess with internals, use mocking and stubbing and this ties the test very much to the implementation. For a run of the mill CRUD controller writing a controller spec is too much fanfare in my opinion. I just care about the interaction and the end result and these are handled fine on a higher level. Integration tests are not used to test many controllers, they are used to test the entire MVC stack and writing them for a single controller (a single user scenario actually) is a good thing. When I have the user scenario covered a plain controller spec is superfluous. I agree that they can be useful for unhappy paths. |
I have to admit that I wrote this guideline because lately I've been writing a JSON API, and it just felt right. When thinking about this I always picture a Rails app. What I actually do is collect all the common functionality (JSON format, security, access, errors) as shared examples, as @svs pointed out. The difference is that I do it to check the final result. But after all those comments I'm getting rather curious. Can you give me some real examples where something can't be done with integration tests, but can be done with controller tests? Actually, my experience with controller tests wasn't nice, for two main reasons.
The last one in particular is now gone. |
You don't have to use mocks if you don't need them; that applies to all kinds of tests.
Sorry I didn't mean exactly a few controllers, I meant a scenario. |
Your question is a bit confusing to me, since I think some things are easier to do with controller tests than with integration tests. For example: testing authentication and permissions, redirects (including redirects that depend on headers), sending special headers, and before/after filters that do work that is "invisible" on the page. |
+1 to @route, @svs. I think the same. Let me add something: we shouldn't forget the important idea of "divide and conquer". We should use integration specs for rough, surface-level testing, where we pretend to be an ordinary user and think and act the same way. We should use controller specs to check the details of the implementation. That level of spec lets us concentrate on the implementation rather than on integration, and lets us fill in the narrow places that integration tests miss. Let me note that view specs can be very helpful too, in some specific cases of course. |
Small blog post about this topic: http://svs.io/post/32926616364/painless-controller-testing-in-rails |
+1 to svs's approach. Until recently I was very much in agreement with tilsammans, but messy controller specs are a symptom of a different problem with your testing, and more integration tests are not necessarily the solution. Give me a messy set of controller specs over a messy set of integration specs any day of the week, please. If you don't TDD your controller code, then you're doing TDD bad, and you should feel bad. If you attempt to get coverage through all paths of your controller code using integration specs, then you're going to end up with a lot of (slow) integration specs. Cue the math: http://www.jbrains.ca/permalink/integrated-tests-are-a-scam-part-1 Also, doing integration tests in Rails generally implies you're using a tool like Capybara, but then where do you put your API tests? Not in your Capybara specs, I hope... http://www.elabs.se/blog/34-capybara-and-testing-apis For better or worse, an API can be expressed really nicely in a controller spec. My approach now: BDD/TDD, but don't get attached to every single integration spec you write; keep only the high-value ones and let your unit tests do the rest. |
Integration tests with Cucumber/Capybara usually tend to be much slower than well-written unit tests... so for the quick health feedback alone that unit tests give me about my app, I would not do without them... that seems to be an aspect a lot of people forget lately when they say they are only doing integration tests... |
We have to do integration tests anyway, either manually or automatically. We have to do them because it is the integrated product that is billed to the customer. Unit tests are merely a convenience for the developers, albeit a very useful one. Manual integration tests are slower than automated integration tests, so as soon as the application becomes a bit complex it makes sense to have it highly covered with automated integration tests. If those are too slow for your workflow, then it is probably a good idea to run them on a CI server. Even then, some amount of manual testing is useful. Unit tests (models and controllers) are very useful to developers because they are faster to run than integration tests and because they encourage good design, but they are redundant. |
Disclaimer: I'm a newb, let me know if I'm wrong on any of these points. If following this advice, how does the author intend to test their API for apps that serve requests for JSON or XML? Also, it seems a lot faster to test CRUD operations by interacting directly with the controller, since you're able to set user login state rather than having to hit your whole webapp. For instance, in a controller spec you could have:

```ruby
session[:user_id] = FactoryGirl.create(:user).id
post :create, :world => FactoryGirl.attributes_for(:world, :brand_id => brand.id)
```

Testing this in Capybara would have to hit the login section of your site, navigate to the page, then fill in each input of your form. That turns a 200ms test into a few seconds. I think integration tests have their place in ensuring your webapp is generally working, but more rigorous tests of what input is acceptable seem much more efficient as controller specs, both in testing speed and in reducing code complexity. |
@sdeframond Exactly. Unit tests are less useful than integration/acceptance tests. They're great for models, but not for controllers, where they're too complicated and don't test what you really want tested. @svs:
I think you're wrong, as I'll explain below.
No. That's why I don't trust it to controller specs. What do I mean by that? Well, for a user-facing app (i.e. almost all of them), the API for my app is the UI. The only way to test the UI is through integration and acceptance tests. Controller tests are simply useless for this: a controller test can tell you what the controller did, but not what the user actually saw. In other words, controller tests don't actually test what you care about. They don't test what you see. (I do occasionally write controller tests for controllers that manage HTTP service endpoints instead of Web pages, but even there I usually find acceptance tests more useful.)
Wrong. I get the advantages of TDD for my controllers by testing what I see through my integration tests. And my controllers aren't bloated or unfocused; it's rare that I have more than about 3 lines of code in any action.
I do want to test these things, but not directly, just as I don't want to test my private model methods directly. They are internal implementation, and as such are sufficiently tested by means of testing the interfaces that I do care about. Test what you see.
No. If the user cares about the flash message, it should be tested (and if not, don't bother testing it, but focus on whatever the user does care about). That's not unnecessary complexity; it's testing what's important to the user. And in my experience, controller tests are far more brittle than Cucumber acceptance stories.
That's beautiful but useless. You're testing that |
Also, one more thought: fast tests are not useful if they don't actually test what you need tested. |
I am not saying you don't need integration specs. I am saying that they are no substitute for controller specs (and model specs). People much smarter than I have said it much better than I ever can: http://blog.thecodewhisperer.com/2010/10/16/integrated-tests-are-a-scam/ I posit that it is impossible to meaningfully cover all your codepaths using integration specs. The complexity explodes combinatorially when you go from one model to two to four to forty. Add in some controllers, throw in some gems and a layer of JavaScript, and you have several million possible cases to test. This is why we push the testing down to the lowest layer possible, and at higher layers we only check increasingly high-level functionality, such as checking parameters passed or alerts received and so on. Do you, in your integration specs, check whether an item got added to the database? Take the case of checking for uniqueness of an email. It is possible to do this with an integration test. Then you want both name and email to be unique. And no wait, name should be unique only if the user is a premium user... Congratulations, you just spent 15 minutes setting up this test and a minute running it with Capybara each and every time, when it's so much simpler to do at the model level. These tests are in no way redundant. There is no other way to meaningfully test your app completely in any sort of reasonable timeframe.
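The email/name uniqueness rule described above is exactly the kind of logic that is cheap to exercise at the model level. Here is a framework-free sketch (the class, method, and rule details are hypothetical illustrations; in Rails this would be a `validates ... uniqueness:` declaration plus a conditional validation, covered by a fast model spec):

```ruby
# Hypothetical model-level rule: email must be unique, and name must be
# unique only for premium users.
class User
  attr_reader :name, :email, :premium

  def initialize(name:, email:, premium: false)
    @name = name
    @email = email
    @premium = premium
  end

  # A new record is valid unless its email clashes, or (for premium users
  # only) its name clashes, with an existing record.
  def valid_against?(existing)
    return false if existing.any? { |u| u.email == email }
    return false if premium && existing.any? { |u| u.name == name }
    true
  end
end

existing = [User.new(name: "alice", email: "a@example.com", premium: true)]

User.new(name: "bob", email: "a@example.com").valid_against?(existing)
# => false (duplicate email)
User.new(name: "alice", email: "b@example.com", premium: true).valid_against?(existing)
# => false (premium name clash)
User.new(name: "alice", email: "c@example.com").valid_against?(existing)
# => true (free accounts may share names)
```

Each branch of this rule is one cheap in-memory assertion here, versus a full browser round-trip per branch in a Capybara test.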
Agreed. But UI tests are UI tests, router tests are router tests, view tests are view tests and integration tests are integration tests. They all have different purposes. Integration tests are called so because they are only to test that the various components of your application have been integrated correctly. Run the requests through the full stack of the app to test whether routers, controllers, models and frontend clients are all behaving well with each other. An integration test's purpose is not to test validation on models or basic correctness of business logic. Down that road lies madness.
The user cares not a whit whether you use Rails or PHP or have a team of pigeons randomly choosing answers. But the developers do care. Tests are the best form of documentation there is. When a unit test fails, it gives an error message that can lead developers to solve the problem quickly. When an integration spec fails, it just says "Expected page to have content 'success' but it does not". Great. Now go looking for the error somewhere else. In the end, of course, follow whatever testing methodology gives you the best results. It's just that I can't see how one is to test every aspect of a system using only integration tests. Or are you suggesting that we let go of testing the whole system and concentrate only on the "important" bits? |
@svs:
And I'm saying you're wrong.
I have read that article. He doesn't back up his assertions, and doesn't
I'll look at that.
True. And the same is also true of unit specs, at least once you try
Generally not directly (I save that for model specs, which are technically
I do this sort of testing at the model level, and I'm not sure what gave the contrary impression. I'm pressed for time right now and will attempt to respond to your other points later. |
Just looked at the jbrains.ca link. It's interesting, but appears to be talking about a different type of use of integration tests, in a different programming context. I don't think it brings much useful (pro or con) to the present discussion—at least I don't see the relevancy to Rails-style tests. More later. |
It seems to me that relying only on integration testing is bad practice. Integration tests always go through views, but what about features that do not have views and happen in the background? If you test only what you see, it means that you do not test what you do not see. There are many such things, such as e-mail scheduling, payments, data transfer, file manipulation, format conversion, etc., that can be overlooked in testing if you believe that controller testing is unnecessary. Controller testing and integration testing are both parts of application testing. It is not good to cut off a part of the application and throw it away, saying "Oh, I don't care about that." It can happen that you release the application with plenty of bugs but are not aware of them, because you didn't care. I agree that in the case of CRUD actions integration testing is enough, or even better in simple cases, but I disagree that integration testing is enough in every case. |
If they are features (for the user), there will always be some sort of output, be it HTML views, e-mail, JSON, or whatever. You test that output in your integration tests; it's not just views. If there is no output, then the user never sees them, so there is no direct value for him, and they're probably best either unit-tested in isolation or (probably better) integration tested as part of whatever they support that is of value for the user. You're complaining about an overly narrow type of integration testing. No one is actually advocating that, AFAIK. |
The rest of my response to @svs:
Mostly agreed. Validation of models lives in the model and can be easily verified by unit testing the model. To the extent that business logic lives in the model or other atomic object, same answer. I'm not disputing this point. I do unit-test my models quite extensively. That makes sense: they're "heavy" objects with lots of state and behavior. Properly skinny controllers are not (see more on this point below). However, if for some reason I could only have one kind of tests, I would trust my integration tests to show me that the application worked correctly as a whole. I would not trust my unit tests to do so, because they cannot.
But they should document interface, not implementation. When the developers need to know about implementation, they either inspect the code or write separate developer docs. They don't (or shouldn't) look at the tests as implementation documentation: that's very much not the point of tests. This is why refactoring works so well with tests: the tests specify the interface, and you can change the implementation as you like with the confidence that the interface hasn't been broken.
That has not been my experience at all. Even in quite complex applications, I can generally tell exactly where the error is from my integration test failures. When I can't, it has always been due to complex interactions between controllers and models that wouldn't have been caught by controller unit tests anyway! Perhaps your integration tests are written at a different level of granularity if you're having a problem in this respect?
I don't test every aspect of my system, I suppose—that's not feasible in a system of any complexity. Rather, I test every aspect of my system that the user cares about. If something has not been specified in a user story, then I believe its behavior is undefined, so I'm not going to write a test for it. (Before you accuse me of being slipshod here, note that part of my process of writing user stories is asking the user detailed questions about what the system should do in every case that the user or I can think of. Those answers get written into tests.) Another reason that I don't find much utility in writing Rails controller tests is pure practicality. I follow "skinny controller, fat model" as a matter of course, which means that my controller actions are simply brain-dead glue code. Anything complex in a controller action gets refactored into a model method. This means that my controller actions are typically no more complex than:

```ruby
def index
  @user = session[:current_user]
  @posts = @user.posts.published
end
```

There's nothing interesting to test here in isolation: if you try, you'll wind up mocking or stubbing so much that it isn't really a test. So you test it through what the user sees. I'd actually say—and I hadn't thought about it this way before—that if your controller methods benefit from unit tests, then that probably means your controllers are too fat. |
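The "skinny controller" argument above can be made concrete: the only interesting logic is the model-side `published` query, so that is what gets a unit test, while the glue action is covered by testing what the user sees. A framework-free sketch (all names here are hypothetical stand-ins, not the original app's code):

```ruby
# Hypothetical stand-ins: the model-side query (`published`) holds the only
# logic worth unit testing; the controller action is mere glue around it.
Post = Struct.new(:title, :published)

class PostCollection
  def initialize(posts)
    @posts = posts
  end

  # The model method the skinny controller action delegates to.
  def published
    @posts.select(&:published)
  end
end

posts = PostCollection.new([Post.new("draft", false), Post.new("live", true)])
posts.published.map(&:title) # => ["live"]
```

A one-line assertion on `published` covers the interesting behavior; the `index` action that calls it needs no mocks at all.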
@marnen Where do you place (and test) access controls? Surely controllers (and their tests) are the right place to put the security boundary and certainly to test it. |
@josephlord What sort of access controls are you referring to? Authentication? Authorization? Something else? |
Actually, I guess I can expand on that before @josephlord replies. First of all, anything that affects the user's experience needs to be in an acceptance/integration test, so that it's verified to be present where it's supposed to be. That's a given. Now, as far as authorization goes, I like putting as much of that as possible in the model (Cancan's approach is good here): surely the model (or a dedicated authorization object) is the right place to know who may do what. Authentication probably belongs more in the controller than authorization does, but here again, I'm not sure the controller methods do much that's interesting to test in isolation. Does that answer your questions? |
@marnen Pretty much answers it. I would have the model knowing who can access specific objects, but wouldn't expect it to check who the current user is on requests, as I think that is a controller job. I also allow different parameters based on who the user is at times (e.g. setting password and password_confirmation is only allowed for the current user). These functions should be tested in the controller (particularly the unauthorized cases), as a malicious actor could bypass the HTML form exercised by the integration tests and insert additional parameters. |
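The concern above—that a malicious actor can post parameters no HTML form contains—is what parameter whitelisting addresses. A framework-free sketch of the rule josephlord describes (the helper name and field list are hypothetical; in Rails this is what strong_parameters' `permit` does, and the negative case is exactly what a controller-level test would check):

```ruby
# Hypothetical whitelist: password fields may only be submitted by the
# account owner, and unknown keys (e.g. "admin") are always discarded,
# whatever the raw request contained.
def permitted_params(raw, current_user:, target_user:)
  allowed = %w[name email]
  allowed += %w[password password_confirmation] if current_user == target_user
  raw.select { |key, _| allowed.include?(key) }
end

raw = { "name" => "Eve", "password" => "x", "admin" => "true" }

permitted_params(raw, current_user: :eve, target_user: :eve)
# => {"name"=>"Eve", "password"=>"x"}   ("admin" always dropped)
permitted_params(raw, current_user: :eve, target_user: :alice)
# => {"name"=>"Eve"}                    (password dropped for someone else's record)
```

The second call is the "unauthorized case": the test asserts on what gets *rejected*, which no happy-path form walkthrough would exercise.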
Exactly. The controller passes the user to the model method as a parameter.
So your authorization logic actually has something like:

```ruby
class User
  def can_change_password?(user)
    if user == session[:current_user]
      # ...
    end
  end
end
```

? That's a bit tough to know what to do with; I think I'd call it poor MVC myself, since the model normally shouldn't be touching the session (unless you have a session-wrapping model like Authlogic's). I'd rather see:

```ruby
class User
  def can_change_password?(user)
    user == self
  end
end
```

which is easier to test at the model level and keeps the MVC layers more separate.
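The session-free version really is trivially exercised at the model level; a minimal runnable demonstration (no controller, request, or session required):

```ruby
# The same session-free check as above, runnable on its own: authorization
# logic that compares plain objects needs nothing from the web layer.
class User
  def can_change_password?(user)
    user == self
  end
end

alice = User.new
bob   = User.new

alice.can_change_password?(alice) # => true
alice.can_change_password?(bob)   # => false
```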
Controller testing won't actually help here, since it does not guarantee that you're testing the action that the malicious actor would be requesting: the malicious actor is not requesting a controller action, but rather a URL. So do this as an integration test by driving curl or something. The fundamental point is this: users request URLs, not controller actions, and therefore testing a controller action cannot, by itself, verify that the user will see what's in that action, because it does not test that the user ever gets to that action from a given URL. |
@marnen Actually my user controller (using Rails4/strong_parameters) has this logic:
The controllers will, in other places, ask the model whether the user can take an action before doing it. The model never knows the current user, and the current user doesn't get passed to the model (except where the user really is the subject of the operation). |
Actually it is the other way round. Someone may find a way to reach the controller via URLs you don't expect (a different format, a different request type, etc.), but they have to hit a controller at some point, so that is the choke point for requests and the right place for tests that ensure safe parameters and correct access rights. You could test with curl, but that would be less complete (e.g. if Rails 5 adds a new request format), harder, and slower.
I largely agree for testing the behaviour that you want to see, but there is still value in testing the unexpected, where you don't really care what the response looks like but you don't want it to succeed. |
I'd say it's unlikely, but not absolutely impossible. How do you test complex model logic?
Then clearly, they didn't know enough to make a useful decision: Cucumber steps are generally implemented in RSpec, so that would have been no problem.
Yes, if you have to only have one kind of test for some reason, that is generally the right kind, because it can test everything, whereas unit tests cannot. But unit tests are often useful anyway; it's impractical to set up an integration test for every permutation of complex model logic.
...which could explain the abysmal coverage numbers. Are they at least doing all new development test-first? |
"Test what you see" seems odd to me. In fact, users don't "see" your models either. Yet still you are not advocating that we only write acceptance tests? The most important problems with acceptance tests are:
Of course I write acceptance tests for every user story. But what about error handling? What about "my redis connection died and I cannot enqueue this job"? What about "the third-party API is down and I cannot process payments"? What about invalid or malicious user input, etc.? Testing all of these cases (which may well appear in multiple parts of the site) through the UI doesn't seem practical to me. If you say there is no value in unit testing controllers, you're basically saying that controllers are not meaningful units within your code (either that, or you're against unit testing in general). The controller's responsibility is a) to take input from the user and pass it to the app, and b) to parse the response from the app and deal with it in a way that makes sense to a user (either by returning some JSON in an API or by setting up instance variables for a view). That's enough responsibility that a unit test makes sense at least in some cases. Also, on another note, "skinny controller, fat model" is an antipattern IMHO. The more reasonable approach is "skinny everything". And just because classes are skinny doesn't mean there's no value in unit testing them. To each their own, I guess, but I see no a priori reason why unit testing controllers should be considered bad. |
Why? It is the only way to know that your application works as expected for the user.
Acceptance tests are the most important tests, because they are the only tests that can show that the application is working as a whole. I write unit tests to make refactoring easier and to increase the number of execution paths I can test, not because I think they will tell me anything about whether the application as a whole works properly. Unit tests -- even controller unit tests -- cannot give you any assurance at all that the application works as desired for the user. Only acceptance tests can do that.
This right here is a big reason to have unit tests. The other big reason is that it's often more efficient to test at a smaller granularity before running acceptance tests. However, don't delude yourself into thinking that you can feasibly unit-test every path through your code either. That quickly runs into huge combinatorial explosions. C0 plus some common C1 use cases is usually as good as you're going to get.
They can actually be quite fast if you use a headless browser. But I'd rather my tests be correct than fast.
Acceptance tests are ideal for this. Utilities such as WebMock are great for simulating error conditions caused by external services.
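The idea of simulating a failing external service can be sketched without WebMock by injecting a failing transport (all names here are hypothetical; with WebMock itself you would stub the HTTP request instead and assert on the same app-level behavior):

```ruby
require "timeout"

# Hypothetical gateway: the transport is injected, so a test can swap in one
# that raises, simulating "the third-party payment API is down".
class PaymentGateway
  def initialize(transport)
    @transport = transport
  end

  # Returns a user-facing status instead of leaking the exception.
  def charge(amount)
    @transport.call(amount)
    :charged
  rescue Timeout::Error
    :payment_unavailable
  end
end

up   = ->(_amount) { true }
down = ->(_amount) { raise Timeout::Error }

PaymentGateway.new(up).charge(100)   # => :charged
PaymentGateway.new(down).charge(100) # => :payment_unavailable
```

Either way, the assertion stays at the level the user cares about (what status they see), not at the level of which HTTP call failed.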
You can use acceptance tests for that.
It's quite practical. I do it on a regular basis. Read my earlier posts in this thread for suggestions on how. Any test that simulates user input should involve the UI, I think.
I think you may have responded to this thread without actually reading it, because I already answered this above. Briefly put, while a controller may be a meaningful unit, in a proper Rails application, the controller should be nothing more than a tiny piece of brain-dead glue code. All the logic should be in models and service objects, so you can't meaningfully unit-test your controllers without mocking all their collaborators -- and at that point, you're testing your mocks, not your controllers. If your controllers have enough logic in them to benefit from unit tests, then you should refactor them. Am I against unit testing in general? No. But I don't think it's anywhere near as important as acceptance testing. I'd be willing to omit unit tests for simple models too, if I had suitable acceptance tests (though I don't tend to do this in practice).
Yes.
Not as you stated. Marshaling data for the view is the controller's responsibility, but the parsing and processing of user input is supposed to take place in the model layer.
And that's what's wrong with your controllers. You should create model methods for the processing. Then the controller consumes the results of those methods and passes them to the view.
"Fat model" as I interpret it means that the model layer is fat. Of course no one class should be fat. But almost nothing should happen in the controller layer. The controller just brainlessly passes data from model to view, while all the logic is in the model layer. That's what "skinny controller, fat model" really means, I believe. And that is proper Rails MVC, not an antipattern.
If a class does nothing but pass data back and forth between its collaborators, and contains no logic of its own, then there is nothing to unit test that's meaningful. And that is what a proper Rails controller should look like.
Software engineering ought to be based on demonstrable facts. I do not believe that "to each their own" is a phrase that has any place in this discipline.
Because it's unnecessary, it's painful to implement, it verifies the wrong things, and it's only useful if you're doing MVC wrong. Add those facts up and the conclusion is quite clear: it's a waste of time and effort. The time would be better spent refactoring logic into the model layer and writing acceptance tests. |
Relevant: a very interesting On 05/05/15 23:49, Marnen Laibow-Koser wrote:
|
@sdeframond: ooh, that's fascinating (though I wish they'd transcribed it -- I have more time to read than to watch video)! I'll note, however, that DHH apparently thinks hexagonal Rails is a bad thing, while I think it's a path to better use of Rails MVC. (In general I don't think much of DHH these days -- he did a great job with early Rails, but can't seem to see beyond it.) |
Something does not become a science simply by someone stating that it is. If you want demonstrable facts, please cite extensive peer-reviewed studies, not just personal anecdotes with thousands of possible confounders. Failing that, I suppose that yes, it is indeed a matter of opinion. I think we have different ways of writing both applications and tests in general. Most importantly, I do not write tests to verify correctness. In fact, in almost all cases I am much quicker verifying correct behaviour by hand. The reason I test is to catch regressions, and for that purpose unit tests are as valuable as integration tests. Of course unit tests can be brittle, but so can integration tests ("whoops, I changed the post login message, now I have to rewrite my tests"), so it's all a question of writing good tests anyway.
I disagree. My models should not have to know that they are part of a Rails app (it's bad enough that they're tightly coupled to ActiveRecord, i.e. to the database). I should be able to take my models and reuse them somewhere else; that's not just theory, I've done that before. The controller on the other hand is the boundary, the entity that translates web requests into business requests and business responses into web responses (including setting up instance variables for the view, rendering a JSON representation, etc). IMHO, this is similar to the approach discussed by Uncle Bob in Architecture: The Lost Years. |
...which I didn't do. I merely said that we should take an evidence-based approach to software engineering, not that it was a science.
What do you write tests for, then? (Yes, I see you said regressions, but is that the only purpose?) And what do you mean by "correctness" here? (I can think of at least two meanings that make sense in this context.) You're right that it's generally quicker to run a one-off test by hand than to automate it. However, refactoring and other maintenance activities require repetition of the same tests over and over, to make sure that nothing breaks during the process. And that's where automation comes in. Also, think about this: the way you make sure your application is correct is by catching regressions. And catching regressions has no purpose except to ensure your application is correct. So what do you mean when you say you don't write tests to ensure that the application is correct? What would that kind of test be, to you?
Agreed. That is completely orthogonal to the idea that the model layer should be where data processing happens. The model can process data received from the controller without knowing where it came from or where it's going -- and it should. The controller should call model methods and consume the return values. The service objects, OTOH (which are part of the model layer but somewhat different) generally probably do know that they're part of a Rails app, and have less reusability.
That is how Reenskaug-style MVC is designed, as used in Smalltalk and in Cocoa. It is emphatically not how Rails MVC (originally called Model2 MVC, from the name of an earlier implementation of this pattern) is intended to work. There is more than one MVC architecture paradigm out there, and the term is thus a bit overloaded. Reenskaug MVC doesn't work very well for server-side Web applications, or so I understand. It's unfortunate that the related but different paradigm for Web applications is called by the same name even though it's not the same thing. The controller in Reenskaug MVC is an intelligent object, responsible for quite a lot of stuff (and thus it makes sense to unit-test it). The controller in Rails MVC should be as dumb as you can possibly make it, simply delegating to intelligent classes in the model layer. Right now it sounds like you're trying to do Reenskaug MVC in a framework that is designed for a different pattern altogether. |
By the way, do any of you know how to do proper integration tests with Sidekiq? I haven't found better than to test that 1) the job was correctly enqueued. What do you think? -Sam On 06/05/15 18:39, Marnen Laibow-Koser wrote:
|
While there are options to inline Sidekiq so the jobs are run directly (see the docs), I think it's not a particularly good idea. Basically there are several strategies (not exclusive, but complementary):
The last option is very expensive (and sometimes not feasible, e.g. if you have a delay for the worker), so IMHO it should be restricted to very important cases. |
That's a really interesting question. I'm trying to remember how I've handled this in the past. I think I would tend to have the job run synchronously in the testing environment; otherwise there's really no way to follow the whole process through. Or (mostly equivalent) somehow get a promise from the enqueued job that resolves when it's done, and test that. It seems to me that this is the approach I've taken in the past: run synchronously if Another approach would be to treat the queue like an external service: mock the results of the enqueued job, then test the job logic elsewhere. I'm not sure which of these I like better. They all have problems. I'd tend to decide on a case-by-case basis. |
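The "run jobs synchronously in the test environment" option discussed above can be sketched framework-free (the queue class is hypothetical; Sidekiq's real equivalent is its inline testing mode):

```ruby
# Hypothetical in-memory queue: in "inline" mode (the testing setting) jobs
# run the moment they are enqueued, so an integration test can assert on
# their side effects immediately instead of polling a background worker.
class FakeQueue
  attr_reader :jobs

  def initialize(inline:)
    @inline = inline
    @jobs = []
  end

  def enqueue(job)
    @inline ? job.call : @jobs << job
  end
end

results = []
queue = FakeQueue.new(inline: true)
queue.enqueue(-> { results << :mail_sent })

results    # => [:mail_sent]
queue.jobs # => [] (nothing left pending; the job already ran)
```

The trade-off noted above still applies: inline execution hides timing behavior (delays, retries), so it tests the job's effect, not the queueing itself.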
Considering how contentious this rule is, I really think it should be removed from the guide as a recommendation. |
@tgaff The principle isn't controversial. You'll notice that most of the disagreement comes from people who are new to RSpec, not from experienced RSpec users. Besides, if it were obvious, there would be no need to recommend it, would there? |
I don't want to add to this discussion again, but to state that this principle is not controversial is ... quite a statement. I also just watched a screencast by Gary Bernhardt yesterday, where he refactored a big controller test - in the end the controller spec was smaller and more isolated, but it was still kept in place. I'll furthermore refer to this blog post. |
Testing a controller can fall into multiple categories!
If you do have to test the ins and outs of the controller, you're back to case 1, where you're defining a contract. Once again: You should unit test, test business requirements (commands), acceptance test, and then integration test -- in that order, because that's the order in which things are liable to change. |
@ryanmtaylor Just answer me this question: what is a controller's responsibility? Because before you answer that question you can't answer whether and how controllers should be tested. My opinion is that Rails controllers by design / common usage do a really, really poor job at adhering to the SRP. Are they supposed to cover business logic? Parameter handling? Directing routes to specific entrypoints? |
IMO this guideline should be updated.
It should probably say RSpec and/or Capybara. I don't think feature specs with Capybara can replace controller specs and/or request specs in most cases. Even if they could, it would result in a slow and likely brittle test suite. For instance, Capybara is not suited for testing APIs. You should probably mention request specs, which are suited for API testing and widely used in integration tests as of Rails 5 (the RSpec 3.5 release notes discuss this).
|
Explicitly not. That's the model's job, and always has been as far as Rails MVC is concerned. This is nothing new.
That's how I strive to structure my controllers (and usually succeed). Anything complex goes into a model or a service object (which is really a special kind of model).
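A minimal plain-Ruby sketch of that structure, with invented names (`Signup`, `SignupsController`): the controller method is pure delegation, and everything worth testing lives in a model-layer service object that can be unit-tested on its own.

```ruby
# Hedged sketch of a "skinny controller". Names are illustrative,
# not part of any real application or framework API.
class Signup
  # All the decision logic lives here, in the model layer.
  def self.call(email)
    return { ok: false, error: "email required" } if email.to_s.strip.empty?
    { ok: true, email: email.strip.downcase }
  end
end

class SignupsController
  # Nothing here is worth a dedicated unit test: it is pure delegation,
  # and a feature/request spec exercises it end to end anyway.
  def create(params)
    Signup.call(params[:email])
  end
end

SignupsController.new.create(email: " A@B.com ") # => { ok: true, email: "a@b.com" }
```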
You may be right, but "skinny controller, fat model" has long been the Rails ideal. Therefore, if you're doing Rails right, controller unit tests are generally not meaningful. If you need controller tests, you're probably doing Rails wrong by making your controllers too fat.
...which is why I hate strong parameters. That kind of logic belongs in the model layer. (As I've said elsewhere in this thread, I basically proxy my strong parameter logic to the model layer in my applications. That way I work with, rather than against, the framework, but still keep the data validation logic in my models.)
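One way to read "proxy my strong parameter logic to the model layer" is sketched below, in plain Ruby. The `PERMITTED_PARAMS` constant and the `Hash#slice` filtering are illustrative assumptions, not the real strong_parameters API; the point is only where the knowledge lives.

```ruby
# Sketch: the model owns the list of permitted attributes; the controller
# merely proxies to it. Names and mechanism are invented for illustration.
class Article
  PERMITTED_PARAMS = %i[title body].freeze

  attr_reader :attributes

  def initialize(attributes)
    @attributes = attributes
  end
end

class ArticlesController
  def create(raw_params)
    Article.new(article_params(raw_params))
  end

  private

  # The controller stays dumb: it asks the model which keys are allowed.
  def article_params(raw_params)
    raw_params.slice(*Article::PERMITTED_PARAMS)
  end
end

article = ArticlesController.new.create(title: "Hi", body: "...", admin: true)
article.attributes # => { title: "Hi", body: "..." } -- :admin was filtered out
```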
They can (especially with Cucumber on top) and should, if the controllers are sufficiently skinny. (If they're not sufficiently skinny, that's a problem to fix, not to palliate with more tests.)
For API testing, I very highly recommend the https://github.com/zipmark/rspec_api_documentation gem. |
I'm referring to applications with extremely skinny controllers. Let me rephrase: Feature specs with Capybara may replace controller/request specs, but it will result in a slower test suite. Additionally, finding the problems behind test failures should take longer (lower-level tests help to pinpoint issues) and the fact that your tests rely on DOM elements will likely increase brittleness.
I did not know that gem. Looks great, thank you for the recommendation. |
So what? The tests will be more correct, especially if the controllers are skinny.
That's why we have model unit tests. If the controllers are sufficiently skinny, there is essentially nothing in them that will break, so there is nothing in them to test.
The fact that the tests rely on DOM elements increases correctness. I want my tests to break when the DOM changes significantly, because the DOM is what the user interacts with. |
IMO you can't enjoy the full benefits of the TDD process with a slow feedback loop. Also, if tests take too long to run, it is easier to lose focus. I believe tests can be correct and fast at the same time.
The DOM may not have to change significantly to break your tests. Anyway, I suggest we agree to disagree, as this seems like one of those discussions that can go on forever. |
That's true to a point, which is why we use the model unit tests as the inner loop. Then we use the feature/acceptance tests as an outer loop if they're too slow to run all the time. Personally, though, I am so convinced of the benefits of test-first development that I'm willing to be pretty patient for slightly slower but correct tests to run.
I use Guard and other tools that help track that focus for me.
That's the ideal, but if I have to choose one of the two, I will choose correct over fast nearly every time—within reason: I don't recompile the Ruby interpreter or reinstall my gems for every test, after all. :)
I want my tests to break if the DOM changes in such a way that the user would have a different experience. If they do not do so, then the tests are not sufficiently exercising the UI. OTOH, if the tests break frequently when the DOM changes in trivial ways, then the tests are too picky. But it's easy to write tests that work through the UI and are just picky enough. The way to do that is to write tests that think like a user: look for display text, not classes or IDs, don't pay too much attention to element order if the user wouldn't, and so on.
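The "think like a user" point can be shown with a deliberately tiny illustration (plain Ruby string checks standing in for Capybara matchers): an assertion on display text survives a markup refactor, while an assertion on a CSS class breaks on a change the user never notices.

```ruby
# Toy illustration, not Capybara: text-based vs class-based assertions
# across a purely cosmetic markup refactor.
old_html = '<button class="btn-primary">Save changes</button>'
new_html = '<button class="button button--cta">Save changes</button>'

sees_text  = ->(html) { html.include?("Save changes") }
sees_class = ->(html) { html.include?('class="btn-primary"') }

sees_text.call(old_html)  # => true
sees_text.call(new_html)  # => true  (user-visible behaviour unchanged, test passes)
sees_class.call(old_html) # => true
sees_class.call(new_html) # => false (test broke on a trivial DOM change)
```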
I don't do that; I think it's sloppy. This is a field where facts are verifiable, which means that if we disagree, then probably at least one (more likely both) of us has more to learn. |
The fact that facts are verifiable in this field does not stop some opinion-based discussions from going on "forever", such as: a) The mockists vs. classicists discussion; b) Extracting model code to concerns vs. service objects in Rails (DHH defends the former); c) Which programming language is best suited for a specific task. I believe our attachment to specific points of view and the fact that most people don't like being proven wrong (even though it provides a great opportunity for learning) tends to skew our perception/interpretation of verifiable facts. I believe they call it "confirmation bias". Anyway, that's a whole other (philosophical) discussion. I agree that, as long as we enter this kind of technical "debate" with an open mind, we can learn a lot. I sure learned from our discussion, as I'm relatively new to testing and striving to learn as much as I can about the "best practices". In that spirit, thanks for a good talk :) |
@marnen: I had given some thought to the points discussed in the previous comments and tried out your approach while developing a small Rails app over the last few weeks. I did not write any controller tests for web views. I have changed my mind and now agree that feature specs can replace controller specs in web apps if the controllers are skinny enough (so if a feature spec fails due to a controller problem, the cause would be quick to pinpoint). After all, everything that we may test in a controller spec is also tested in a well-written feature spec. Although feature specs are much slower than controller specs, our apps need them for UI testing anyway, so the slowness is inevitable. |
That's basically my view. I'm glad it worked out in practice for you! |
Not sure if it's just me, but the link provided now resolves to a porn site. |
@aesyondu Yikes! Looks like Piotr Solnica's site is now at http://solnic.codes, for what it's worth. |
Write your thoughts about the "test what you see" best practice.