Unittest tests order
How can I be sure of the order in which the unittest test methods run? Are alphabetical or numeric prefixes the proper way?

class TestFoo(TestCase):
    def test_1(self):
        ...
    def test_2(self):
        ...

or

class TestFoo(TestCase):
    def test_a(self):
        ...
    def test_b(self):
        ...

python unit-testing
possible duplicate of changing order of unit tests in Python
– S.Lott, Nov 4 '10 at 10:25

Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings. docs.python.org/library/unittest.html
– morsik, Aug 26 '15 at 13:48
asked Nov 4 '10 at 9:32 by nmb.ten, edited Jun 27 '12 at 14:11 by 0xc0de
16 Answers
You can disable the sorting by setting sortTestMethodsUsing to None:
http://docs.python.org/2/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing
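One caveat, as a sketch: in CPython the candidate names come from dir(), which is itself alphabetical, so setting the attribute to None alone may not change the observed order. To get true definition order you can instead supply a cmp-style comparison based on source line numbers (the helper name by_line below is my own, not part of unittest):

```python
import unittest

class TestFoo(unittest.TestCase):
    def test_b(self):
        pass
    def test_a(self):
        pass

# The default loader sorts method names alphabetically.
loader = unittest.TestLoader()
print(loader.getTestCaseNames(TestFoo))   # ['test_a', 'test_b']

# A cmp-style comparison by source line number restores definition order.
def by_line(name_a, name_b):
    line = lambda name: getattr(TestFoo, name).__code__.co_firstlineno
    return line(name_a) - line(name_b)

loader.sortTestMethodsUsing = by_line
print(loader.getTestCaseNames(TestFoo))   # ['test_b', 'test_a']
```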
For pure unit tests, you folks are right; but for component tests and integration tests...
I do not agree that you should assume nothing about the state. What if you are testing the state?
For example, your test validates that a service is auto-started upon installation. If, in your setup, you start the service and then do the assertion, then you are no longer testing the state; you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space, and it just becomes impractical to run the setup frequently.
Many developers tend to use "unittest" frameworks for component testing... so stop and ask yourself: am I doing unit testing or component testing?
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend, for instance. Do not be dogmatic; there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua, Jul 22 '14 at 9:39

Even if you set the desired order, it could fail in a more 'intelligent' environment. My IDE can skip tests which were successful earlier when the code hasn't changed. Making tests dependent on each other is asking for trouble.
– Maciej Wawrzyńczuk, Nov 26 '18 at 15:33
There is no reason given that you can't build on what was done in a previous test, or that you should rebuild it all from scratch for the next test. At least, no reason is usually offered; instead people just confidently say "you shouldn't". That isn't helpful.
In general, I am tired of reading too many answers here that basically say "you shouldn't do that" instead of giving any information on how best to do it if, in the questioner's judgment, there is good reason to do so. If I wanted someone's opinion on whether I should do something, I would have asked for opinions on whether doing it is a good idea.
With that out of the way: if you read, say, loadTestsFromTestCase and what it calls, it ultimately scans for methods matching a name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and makes a test suite mapping it to the TestCase class. Giving it instead a list ordered as you would like is one way to do this. I am not so sure of the most efficient/cleanest way to do it, but this does work.
I agree with your remarks about unhelpful "don't do that" comments without explanations, but having said that, there are genuine reasons why it's not a good idea to have dependencies between tests. Chief among them: it is nice to have tests fail because a particular thing has broken, and not because of some unclear, undocumented link between the test you're running and some other test which you're not. If you never run isolated tests then that's fine, but being able to run individual tests is helpful in some circumstances, and this is not possible where they depend on each other.
– JimmidyJoo, Mar 10 '15 at 15:24

The answer is that unit tests should be independent of each other so that you can run and debug them in isolation.
– JeremyP, Sep 10 '15 at 12:46

Unit tests should be independent, true. Or, better said, they should be able to be run independently, for many good reasons. But I write functional tests, integration tests, and system tests with the unittest framework as well, and these would be unfeasible to run without ordering them, since system state MATTERS in integration tests!
– Rob Hunter, Apr 12 '16 at 18:04

Can you provide an example of how to set the test execution order?
– Steven M. Vascellaro, Nov 6 '17 at 19:08
Why do you need specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the DB setup code or create testing data ad nauseam.)
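As a sketch of that factoring (the schema and helper function below are invented for illustration, not taken from any real subscription system), the unsubscribe test can build its own fresh state through a shared helper instead of depending on an earlier "subscribe" test:

```python
import sqlite3
import unittest

def fresh_db_with_subscription(user='alice'):
    # Shared fixture helper: a new in-memory DB seeded with one active
    # subscription, so every test starts from the same known state.
    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE subs (user TEXT PRIMARY KEY, active INTEGER)")
    conn.execute("INSERT INTO subs VALUES (?, 1)", (user,))
    return conn

class TestUnsubscribe(unittest.TestCase):
    def test_unsubscribe(self):
        # No reliance on a previous "subscribe" test having run first.
        conn = fresh_db_with_subscription()
        conn.execute("UPDATE subs SET active = 0 WHERE user = 'alice'")
        (active,) = conn.execute(
            "SELECT active FROM subs WHERE user = 'alice'").fetchone()
        self.assertEqual(active, 0)
```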
It might be difficult to run them in parallel if they access a database (which is mostly the case with Django).
– Antoine Pelisse, Nov 4 '10 at 9:38

Each test is a continuation of the previous one. Here is a simple example of test order: testing user subscribing, testing disabling of the subscription, testing unsubscribing of the subscribed and disabled subscription. I must do all the things tested in the previous test again if the tests are not ordered. Is that the wrong way?
– nmb.ten, Nov 4 '10 at 9:51

@MitchellModel Django uses transactions to roll back changes to the database between tests. Your second test should not see the modifications to the database created in the first test. (If you are, your view is probably using transactions - you should be using Django's TransactionTestCase instead of TestCase for that view.)
– Izkata, Nov 28 '11 at 19:50

One reason I can think of is when two tests don't depend on one another, but the components they are testing do. Imagine testing a class B which is a subclass of A. If A has issues, it will fail B's tests too. It would be nicer to get errors related to the A tests first. But overall, it shouldn't make a big difference really.
– Mansour, Feb 6 '12 at 17:02

For debugging, it makes lots of sense to have the (independent) tests ordered from simple to complex.
– Michael Clerx, Oct 22 '12 at 11:27
If you use nose and you write your test cases as functions (and not as methods of some TestCase-derived class), nose doesn't fiddle with the order, but uses the order of the functions as defined in the file. In order to have the assert_* functions handy without needing to subclass TestCase, I usually use the testing module from numpy. Example:

from numpy.testing import *

def test_aaa():
    assert_equal(1, 1)

def test_zzz():
    assert_equal(1, 1)

def test_bbb():
    assert_equal(1, 1)

Running that with 'nosetests -vv' gives:

test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and able to run independently, your functions and classes usually are not independent. They build on one another, from simpler low-level functions to more complex high-level functions. When you start optimising your low-level functions and mess up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-), then it's a lot better for diagnosing the cause when the tests for the simple functions come first and the tests for the functions that depend on them come later. If the tests are sorted alphabetically, the real cause usually gets drowned among a hundred failed assertions, which are there not because the function under test has a bug, but because a low-level function it relies on has one.
That's why I want to have my unit tests sorted the way I specified them: not to use state that was built up in early tests in later tests, but as a very helpful tool in diagnosing problems.
I have a suite of a few hundred test cases and I sadly can't say that's true. It's not avoided on purpose either; sometimes it really was in this order. Also, I'm not sure if it's configurable in nose somewhere, but scrolling through the help I can't make out the option either.
– erikbwork, Mar 26 '15 at 10:16

Your example works, but this doesn't work in my case, as the tests are still executed alphabetically. Reading through the other answers, I realized that I have to isolate my tests properly.
– danidee, Dec 12 '16 at 10:21
Don't rely on the order. If the tests use some common state, like the filesystem or a database, then you should create setUp and tearDown methods that get your environment into a testable state and clean up after the tests have run. Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
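A minimal sketch of that pattern (the file-based state here is just an example of shared state):

```python
import os
import tempfile
import unittest

class TestWithFixture(unittest.TestCase):
    def setUp(self):
        # Every test starts from the same known state: one fresh temp file.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def tearDown(self):
        # Clean up so no test leaks state into the next one.
        os.remove(self.path)

    def test_write(self):
        with open(self.path, 'w') as f:
            f.write('data')
        with open(self.path) as f:
            self.assertEqual(f.read(), 'data')

    def test_starts_empty(self):
        # Passes regardless of order, because setUp recreated the file.
        self.assertEqual(os.path.getsize(self.path), 0)
```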
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in unit test.
That said, one alternative is to use mock objects to mock out and patch the items that should run before that specific code under test. You can also put a dummy function in there to monkey-patch your code. For more info, check out Mock, which is part of the standard library now.
Here are some YouTube videos if you haven't used Mock before: Video 1, Video 2, Video 3.
More to the point, try using class methods to structure your code, then place all the class methods in one main test method.
import unittest
import sqlite3

class MyOrderedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * FROM players")
        draft = [(1, u'Hakeem', u'Olajuwon'), (2, u'Sam', u'Bowie'), (3, u'Michael', u'Jordan')]
        query = cur.fetchall()
        print(query)
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print(third)
        self.assertEqual(third, "Michael Jordan")
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from those tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Completely agree.
– Purrell, May 13 '11 at 21:46

Sorry, but I tend to disagree. Unit tests shouldn't depend on each other, but it still often makes a lot of sense if they are executed in the order they were specified. Say you have two functions a and b, and b uses a. Then it is much better if test_a is executed before test_b, because if a contains an error you will spot it much earlier this way, instead of trying to find the bug in b.
– Elmar Zander, Mar 13 '12 at 16:30

@ElmarZander - if test_b also runs a, then you might have a problem with your test structure, as test_b will end up testing not a single unit b but two: b and a. You should probably mock the result of a in your test_b instead. Unit tests ≠ integration tests.
– mac, Dec 23 '13 at 13:29

@mac Thanks, but I do know what integration tests are. What I wrote had nothing to do with that, and no, I don't have a problem with my test structure. I'm just adhering to a structured design approach, where I compose more complex functions from simpler ones, and it would neither make sense nor be possible to mock every five-line function with another five-line function, but it does make a lot of sense to test the simpler ones before the more complex function built on top.
– Elmar Zander, Oct 16 '15 at 16:20
OK, maybe a bit late, but anyway...
You should try the proboscis library. It will allow you to order your tests as well as set up test dependencies. I use it, and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B, you CAN set this behaviour using the library.
http://docs.python.org/library/unittest.html
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.

class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))

Check out this post for details.
There are scenarios where the order can be important and where setUp and tearDown are too limited. There is only one setUp and one tearDown method, which is logical, but you can only put so much information in them before it gets unclear what setUp or tearDown might actually be doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't log in without an existing account. More importantly, the order of your tests represents some kind of user interaction, where each test might represent a step in the whole process or flow you're testing. Dividing your code into those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:

def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:

def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza": "pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")

However, in such cases you might want to consider structuring your tests differently, so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.

def make_object(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj

def test_00_createEmptyObject(self):
    obj = self.make_object(None, None)

def test_01_createObject(self):
    obj = self.make_object("hello", {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_object("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE". If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop writes a log entry to the effect that it is sleeping 1 second. Once "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setUp()) and decrements the same variable in tearDown().
The effect of this is to allow various data consumers to block the purge functionality for as long as they need, without fear that the purge could execute in between tests. Effectively the purge operation is pushed to the last step... or at least the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge, to test data restoration.
Kudos for the first paragraph. "Don't do that" always comes from some kind of inexperience, i.e. a developer who has never had to automate integration or user acceptance testing.
– pmneve, Dec 12 '17 at 17:50
See the example of WidgetTestCase on http://docs.python.org/library/unittest.html#organizing-test-code ; it says that
"Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance."
So it might be of no use to specify the order of test cases, if you do not access global variables.
I have implemented a plugin, nosedep, for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments, this is often a bad idea; however, there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs. hours).
A minimal example is:

def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass

This ensures that test_b is always run before test_a.
The philosophy behind unit tests is to make them independent of each other. This means that the first step should always be to rethink how you are testing each piece so that it matches that philosophy. This can involve changing how you approach testing and being creative by narrowing your tests to smaller scopes.
However, if you still find that you need tests in a specific order (as that is viable), you could try checking out the answer to Python unittest.TestCase execution order.
Contrary to what was said here:
- tests have to run in isolation (order must not matter for that)
AND
- ordering them is important, because they describe what the system does and how the developer implements it.
In other words, each test brings you information about the system and the developer's logic. So if this information is not ordered, it can make your code difficult to understand.
There are some scenarios where tests need to run in a specific order. For example: I have an API wrapper which logs into an external server. The login needs to happen before any other unit test.
– Steven M. Vascellaro, Nov 6 '17 at 19:11

@StevenVascellaro The tests you're describing are not unit tests; the topic is about unit testing, and you NEVER have a scenario where unit tests need to run in a specific order. It's a code smell indicating bad design or wrong tests. If you write that kind of test, you'd better review what you know about testing, because to me the tests you're describing are useless and make the code hard to change. Think about it: you're talking about testing an external system, which should already be tested. Focus on YOUR system; that's what you're testing.
– gregorySalvan, Nov 8 '17 at 20:55

The topic is not about unit testing but about using unittest to automate integration testing. As a QA Test Automation engineer, my job is to do integration and browser-based 'end-to-end' testing. Unit testing (whatever the tool is) is in the domain of the developer doing test-first development or doing their best to turn over clean code to the next step in testing.
– pmneve, Dec 12 '17 at 17:45

:)))) OK, just this "QA Test Automation engineer" is enough to be sure that we'll never be able to discuss together, and that your code is a total mess to me. Sorry.
– gregorySalvan, Dec 28 '17 at 20:37
16 Answers
16
active
oldest
votes
16 Answers
16
active
oldest
votes
active
oldest
votes
active
oldest
votes
You can disable it by setting sortTestMethodsUsing to None:
http://docs.python.org/2/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing
For pure unittests, you folks are right; but for component tests and integration tests...
I do not agree that you shall assume nothing about the state.
What if you are testing the state.
For example, your test validates that a service is auto-started upon installation. If in your setup, you start the service, then do the assertion, then you are no longer testing the state but you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unittest" frameworks for component testing...so stop and ask yourself, am I doing unittesting or component testing.
15
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend for instance. Do not be dogmatic, there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua
Jul 22 '14 at 9:39
Even if you set desired order it could fail in more 'intelligent' environment. My IDE can skip test which were successful earlier and the code isn't changed. Making test dependent of each other is asking for troubles.
– Maciej Wawrzyńczuk
Nov 26 '18 at 15:33
add a comment |
You can disable it by setting sortTestMethodsUsing to None:
http://docs.python.org/2/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing
For pure unittests, you folks are right; but for component tests and integration tests...
I do not agree that you shall assume nothing about the state.
What if you are testing the state.
For example, your test validates that a service is auto-started upon installation. If in your setup, you start the service, then do the assertion, then you are no longer testing the state but you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unittest" frameworks for component testing...so stop and ask yourself, am I doing unittesting or component testing.
15
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend for instance. Do not be dogmatic, there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua
Jul 22 '14 at 9:39
Even if you set desired order it could fail in more 'intelligent' environment. My IDE can skip test which were successful earlier and the code isn't changed. Making test dependent of each other is asking for troubles.
– Maciej Wawrzyńczuk
Nov 26 '18 at 15:33
add a comment |
You can disable it by setting sortTestMethodsUsing to None:
http://docs.python.org/2/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing
For pure unittests, you folks are right; but for component tests and integration tests...
I do not agree that you shall assume nothing about the state.
What if you are testing the state.
For example, your test validates that a service is auto-started upon installation. If in your setup, you start the service, then do the assertion, then you are no longer testing the state but you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unittest" frameworks for component testing...so stop and ask yourself, am I doing unittesting or component testing.
You can disable it by setting sortTestMethodsUsing to None:
http://docs.python.org/2/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing
For pure unittests, you folks are right; but for component tests and integration tests...
I do not agree that you shall assume nothing about the state.
What if you are testing the state.
For example, your test validates that a service is auto-started upon installation. If in your setup, you start the service, then do the assertion, then you are no longer testing the state but you are testing the "service start" functionality.
Another example is when your setup takes a long time or requires a lot of space and it just becomes impractical to run the setup frequently.
Many developers tend to use "unittest" frameworks for component testing...so stop and ask yourself, am I doing unittesting or component testing.
edited Mar 11 '14 at 6:24
answered Mar 11 '14 at 6:15
max
4,00265083
4,00265083
15
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend for instance. Do not be dogmatic, there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua
Jul 22 '14 at 9:39
Even if you set desired order it could fail in more 'intelligent' environment. My IDE can skip test which were successful earlier and the code isn't changed. Making test dependent of each other is asking for troubles.
– Maciej Wawrzyńczuk
Nov 26 '18 at 15:33
add a comment |
15
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend for instance. Do not be dogmatic, there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua
Jul 22 '14 at 9:39
Even if you set desired order it could fail in more 'intelligent' environment. My IDE can skip test which were successful earlier and the code isn't changed. Making test dependent of each other is asking for troubles.
– Maciej Wawrzyńczuk
Nov 26 '18 at 15:33
15
15
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend for instance. Do not be dogmatic, there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua
Jul 22 '14 at 9:39
+1 for this: "What if you are testing the state". Happens quite often when testing methods talking to a DB backend for instance. Do not be dogmatic, there are legitimate exceptions to the otherwise sensible rule of making each unit test isolated.
– Laryx Decidua
Jul 22 '14 at 9:39
Even if you set desired order it could fail in more 'intelligent' environment. My IDE can skip test which were successful earlier and the code isn't changed. Making test dependent of each other is asking for troubles.
– Maciej Wawrzyńczuk
Nov 26 '18 at 15:33
Even if you set desired order it could fail in more 'intelligent' environment. My IDE can skip test which were successful earlier and the code isn't changed. Making test dependent of each other is asking for troubles.
– Maciej Wawrzyńczuk
Nov 26 '18 at 15:33
add a comment |
There is no reason given that you can't build on what was done in a previous test, or that you must rebuild everything from scratch for the next test. At least, no reason is usually offered; people just confidently say "you shouldn't". That isn't helpful.
In general, I am tired of reading answers here that say basically "you shouldn't do that" instead of giving any information on how best to do it if, in the questioner's judgment, there is good reason to do so. If I wanted an opinion on whether I should do something, I would have asked for one.
That out of the way: if you read, say, loadTestsFromTestCase and what it calls, it ultimately scans for methods matching a name pattern in whatever order they are encountered in the class's method dictionary, so basically in key order. It takes this information and builds a test suite mapping it to the TestCase class. Giving it a list ordered the way you would like is one way to do this. I am not sure it is the most efficient or cleanest way, but it does work.
I agree with your remarks about unhelpful "don't do that" comments without explanations, but having said that there are genuine reasons why it's not a good idea to have dependencies between tests. Chief among them is it is nice to have tests fail because a particular thing has broken and not because there's some unclear, undocumented link between the test you're running and some other test which you're not. If you never run isolated tests then that's fine, but being able to run individual tests is helpful in some circumstances, and this is not possible where they depend on each other.
– JimmidyJoo
Mar 10 '15 at 15:24
The answer is that the unit tests should be independent of each other so that you can run and debug them in isolation.
– JeremyP
Sep 10 '15 at 12:46
Unit tests should be independent, true. Or better said, they should be able to be run independently for many good reasons. But, I write functional tests, integration tests, and system tests with the unittest framework as well, and these would be unfeasible to run without ordering them since system state MATTERS in integration tests!
– Rob Hunter
Apr 12 '16 at 18:04
Can you provide an example of how to set the test execution order?
– Steven M. Vascellaro
Nov 6 '17 at 19:08
answered Nov 7 '12 at 2:17
Seren Seraph
68552
Why do you need specific test order? The tests should be isolated and therefore it should be possible to run them in any order, or even in parallel.
If you need to test something like user unsubscribing, the test could create a fresh database with a test subscription and then try to unsubscribe. This scenario has its own problems, but in the end it’s better than having tests depend on each other. (Note that you can factor out common test code, so that you don’t have to repeat the DB setup code or create testing data ad nauseam.)
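A sketch of that idea, using a hypothetical in-memory stand-in for the database (FakeSubscriptionStore, the method names, and the "alice" user are all invented for illustration):

```python
import unittest

class FakeSubscriptionStore:
    """Hypothetical in-memory stand-in for a subscription database."""
    def __init__(self):
        self.subscribed = {}

    def subscribe(self, user):
        self.subscribed[user] = True

    def unsubscribe(self, user):
        self.subscribed[user] = False

class TestUnsubscribe(unittest.TestCase):
    def setUp(self):
        # Every test gets a fresh store with a known subscription,
        # so no test depends on another test having run first.
        self.store = FakeSubscriptionStore()
        self.store.subscribe("alice")

    def test_subscription_present(self):
        # Passes regardless of which other tests ran before it.
        self.assertTrue(self.store.subscribed["alice"])

    def test_unsubscribe_clears_flag(self):
        self.store.unsubscribe("alice")
        self.assertFalse(self.store.subscribed["alice"])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestUnsubscribe))
```

Because setUp rebuilds the fixture before each test, the two tests pass in any order.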
It might be difficult to run them in parallel if they access a database (which is mostly the case with Django)
– Antoine Pelisse
Nov 4 '10 at 9:38
Each test is a continuation of the previous one. Here is a simple example of test order: testing user subscribing, testing disabling of the subscription, testing unsubscribing of the subscribed-and-disabled subscription. If the tests are not ordered, I must redo everything tested in the previous test. Is that the wrong way?
– nmb.ten
Nov 4 '10 at 9:51
@MitchellModel Django uses transactions to roll back changes to the database between tests. Your second test should not see the modifications to the database created in the first test. (If you are, your view is probably using transactions - you should be using Django's TransactionTestCase instead of TestCase for that view)
– Izkata
Nov 28 '11 at 19:50
One reason I can think of is when two tests don't depend on one another, but the components they are testing do. Imagine testing a class B which is a subclass of A. If A has issues, it will fail B tests too. It would be nicer to get errors related to A test first. But overall, it shouldn't make a big difference really.
– Mansour
Feb 6 '12 at 17:02
For debugging, it makes lots of sense to have the (independent) tests ordered from simple to complex.
– Michael Clerx
Oct 22 '12 at 11:27
edited Nov 4 '10 at 10:26
answered Nov 4 '10 at 9:34
zoul
77.9k36218322
If you use nose and you write your test cases as functions (and not as methods of some TestCase-derived class), nose doesn't fiddle with the order but uses the order of the functions as defined in the file. To have the assert_* functions handy without subclassing TestCase, I usually use the testing module from numpy. Example:
from numpy.testing import *
def test_aaa():
assert_equal(1, 1)
def test_zzz():
assert_equal(1, 1)
def test_bbb():
assert_equal(1, 1)
Running that with nosetests -vv gives:
test_it.test_aaa ... ok
test_it.test_zzz ... ok
test_it.test_bbb ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.050s
OK
Note to all those who contend that unit tests shouldn't be ordered: while it is true that unit tests should be isolated and able to run independently, your functions and classes are usually not independent. They build on one another, from simpler low-level functions to more complex high-level functions. When you start optimising your low-level functions and mess up (for my part, I do that frequently; if you don't, you probably don't need unit tests anyway ;-), it is much easier to diagnose the cause when the tests for the simple functions come first and the tests for the functions that depend on them come later. If the tests are sorted alphabetically, the real cause usually drowns among a hundred failed assertions, which fail not because the function under test has a bug, but because a low-level function it relies on does.
That's why I want my unit tests sorted the way I specified them: not to use state built up in early tests in later tests, but as a very helpful tool for diagnosing problems.
I have a suite of a few hundred test cases and I sadly can't say that's true. It's not avoided on purpose either, sometimes it really was in this order. Also I'm not sure if it's configurable in nose somewhere, but scrolling through the help I can't make out the option either.
– erikbwork
Mar 26 '15 at 10:16
Your example works, but it doesn't in my case, as the tests are still executed alphabetically. Reading through the other answers, I realized that I have to isolate my tests properly.
– danidee
Dec 12 '16 at 10:21
answered Oct 27 '11 at 13:10
Elmar Zander
767820
Don't rely on the order. If the tests use some common state, like the filesystem or a database, then you should create setUp and tearDown methods that get your environment into a testable state and then clean up after the tests have run. Each test should assume that the environment is as defined in setUp, and should make no further assumptions.
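As a minimal sketch of that pattern (the class, file, and fixture names here are illustrative, not from the question):

```python
import os
import tempfile
import unittest

class TestWithFixture(unittest.TestCase):
    def setUp(self):
        # Build a fresh, known environment before every test.
        self.workdir = tempfile.mkdtemp()
        self.path = os.path.join(self.workdir, "data.txt")
        with open(self.path, "w") as f:
            f.write("seed")

    def tearDown(self):
        # Clean up so no state leaks into the next test,
        # whatever order the tests ran in.
        os.remove(self.path)
        os.rmdir(self.workdir)

    def test_read_seed(self):
        with open(self.path) as f:
            self.assertEqual(f.read(), "seed")

    def test_overwrite(self):
        # Safe even if this test runs first: setUp rebuilt the file.
        with open(self.path, "w") as f:
            f.write("changed")
        with open(self.path) as f:
            self.assertEqual(f.read(), "changed")
```

Because each test starts from the state setUp builds, both pass in either execution order.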
answered Nov 4 '10 at 9:54
user23743
I half agree with the idea that tests shouldn't be ordered. In some cases it helps (it's easier, damn it!) to have them in order... after all, that's the reason for the 'unit' in unittest.
That said, one alternative is to use mock objects to mock out and patch the items that should run before that specific code under test. You can also put a dummy function in there to monkey-patch your code. For more info check out Mock, which is part of the standard library now (unittest.mock).
Here are some YouTube videos if you haven't used Mock before.
Video 1
Video 2
Video 3
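For instance, a test can patch out a dependency with the standard library's unittest.mock instead of relying on an earlier test to have prepared it; in this sketch, fetch_price and price_report are made-up names for illustration:

```python
import unittest
from unittest import mock

def fetch_price(symbol):
    # Stand-in for something slow or stateful (network, database)
    # that ordered tests would otherwise have to run first.
    raise RuntimeError("no network in tests")

def price_report(symbol):
    return "{}: ${:.2f}".format(symbol, fetch_price(symbol))

class TestReport(unittest.TestCase):
    def test_report_formats_price(self):
        # Patch the dependency; no other test needs to run before this one.
        with mock.patch("{}.fetch_price".format(__name__), return_value=19.5):
            self.assertEqual(price_report("ABC"), "ABC: $19.50")
```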
More to the point, try using class methods to structure your code, then place all the class methods in one main setup method.
import unittest
import sqlite3

class MyOrderedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.create_db()
        cls.setup_draft()
        cls.draft_one()
        cls.draft_two()
        cls.draft_three()

    @classmethod
    def create_db(cls):
        cls.conn = sqlite3.connect(":memory:")

    @classmethod
    def setup_draft(cls):
        cls.conn.execute("CREATE TABLE players ('draftid' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 'first', 'last')")

    @classmethod
    def draft_one(cls):
        player = ("Hakeem", "Olajuwon")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_two(cls):
        player = ("Sam", "Bowie")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    @classmethod
    def draft_three(cls):
        player = ("Michael", "Jordan")
        cls.conn.execute("INSERT INTO players (first, last) VALUES (?, ?)", player)

    def test_unordered_one(self):
        cur = self.conn.execute("SELECT * FROM players")
        draft = [(1, 'Hakeem', 'Olajuwon'), (2, 'Sam', 'Bowie'), (3, 'Michael', 'Jordan')]
        query = cur.fetchall()
        print(query)
        self.assertListEqual(query, draft)

    def test_unordered_two(self):
        cur = self.conn.execute("SELECT first, last FROM players WHERE draftid=3")
        result = cur.fetchone()
        third = " ".join(result)
        print(third)
        self.assertEqual(third, "Michael Jordan")
answered Aug 16 '12 at 5:56
Jason Wirth
There are a number of reasons for prioritizing tests, not the least of which is productivity, which is what JUnit Max is geared for. It's sometimes helpful to keep very slow tests in their own module so that you can get quick feedback from those tests that don't suffer from the same heavy dependencies. Ordering is also helpful in tracking down failures from tests that are not completely self-contained.
Completely agree.
– Purrell
May 13 '11 at 21:46
3
Sorry, but I tend to disagree. Unit tests shouldn't depend on each other, but it still often makes a lot of sense if they are executed in the order they were specified. Say you have two functions a and b, and b uses a. Then it is much better if test_a is executed before test_b, because if a contains an error you will spot it much earlier this way, instead of trying to find the bug in b.
– Elmar Zander
Mar 13 '12 at 16:30
@ElmarZander - if test_b also runs a, then you might have a problem with your test structure, as test_b will end up testing not a single unit b but two: b and a. You should probably mock the result of a in your test_b instead. Unit tests ≠ integration tests.
– mac
Dec 23 '13 at 13:29
@mac Thanks, but I do know what integration tests are. What I wrote had nothing to do with that, and no, I don't have a problem with my test structure. I'm just adhering to a structured design approach, where I compose more complex functions from simpler ones, and it would neither make sense nor be possible to mock every five-line function with another five-line function, but it does make a lot of sense to test the simpler ones before the more complex functions built on top.
– Elmar Zander
Oct 16 '15 at 16:20
answered May 2 '11 at 19:15
eradman
OK, maybe a bit late, but anyway...
You should try the proboscis library. It will allow you to order your tests as well as set up test dependencies. I use it, and this library is truly awesome.
For example, if test case #1 from module A should depend on test case #3 from module B, you CAN set this behaviour using the library.
answered Feb 7 '13 at 14:02
gahcep
http://docs.python.org/library/unittest.html
Note that the order in which the various test cases will be run is determined by sorting the test function names with respect to the built-in ordering for strings.
If you need to set the order explicitly, use a monolithic test.
class Monolithic(TestCase):
    def step1(self):
        ...

    def step2(self):
        ...

    def steps(self):
        # NB: names sort lexicographically, so step10 would run before step2;
        # zero-pad the names (step01, step02, ...) if you go past step9.
        for name in sorted(dir(self)):
            if name.startswith("step"):
                yield name, getattr(self, name)

    def test_steps(self):
        for name, step in self.steps():
            try:
                step()
            except Exception as e:
                self.fail("{} failed ({}: {})".format(step, type(e), e))
Check out this post for details.
edited May 23 '17 at 12:10
Community♦
answered Aug 26 '15 at 13:48
morsik
There are scenarios where the order can be important and where setUp and tearDown are too limited. There's only one setUp and one tearDown method, which is logical, but you can only put so much information in them before it gets unclear what setUp or tearDown might actually be doing.
Take this integration test as an example:
You are writing tests to see if the registration form and the login form are working correctly. In such a case the order is important, as you can't log in without an existing account.
More importantly, the order of your tests represents some kind of user interaction, where each test might represent a step in the whole process or flow you're testing.
Dividing your code into those logical pieces has several advantages.
It might not be the best solution, but I often use one method that kicks off the actual tests:
def test_registration_login_flow(self):
    self._test_registration_flow()
    self._test_login_flow()
edited Apr 18 '16 at 15:42
Rob Hunter
answered May 14 '12 at 22:09
Jonas Geiregat
A simple method for ordering "unittest" tests is to follow the init.d mechanism of giving them numeric names:
def test_00_createEmptyObject(self):
    obj = MyObject()
    self.assertEqual(obj.property1, 0)
    self.assertEqual(obj.dict1, {})

def test_01_createObject(self):
    obj = MyObject(property1="hello", dict1={"pizza": "pepperoni"})
    self.assertEqual(obj.property1, "hello")
    self.assertDictEqual(obj.dict1, {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = MyObject(property1="world")
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
However, in such cases, you might want to consider structuring your tests differently so that you can build on previous construction cases. For instance, in the above, it might make sense to have a "construct and verify" function that constructs the object and validates its assignment of parameters.
def make_myobject(self, property1, dict1):  # Must be specified by caller
    obj = MyObject(property1=property1, dict1=dict1)
    if property1:
        self.assertEqual(obj.property1, property1)
    else:
        self.assertEqual(obj.property1, 0)
    if dict1:
        self.assertDictEqual(obj.dict1, dict1)
    else:
        self.assertEqual(obj.dict1, {})
    return obj

def test_00_createEmptyObject(self):
    obj = self.make_myobject(None, None)

def test_01_createObject(self):
    obj = self.make_myobject("hello", {"pizza": "pepperoni"})

def test_10_reverseProperty(self):
    obj = self.make_myobject("world", None)
    obj.reverseProperty1()
    self.assertEqual(obj.property1, "dlrow")
answered Mar 31 '17 at 0:35
kfsone
I agree with the statement that a blanket "don't do that" answer is a bad response.
I have a similar situation where I have a single data source and one test will wipe the data set causing other tests to fail.
My solution was to use the operating system environment variables in my Bamboo server...
(1) The test for the "data purge" functionality starts with a while loop that checks the state of an environment variable "BLOCK_DATA_PURGE." If the "BLOCK_DATA_PURGE" variable is greater than zero, the loop will write a log entry to the effect that it is sleeping 1 second. Once the "BLOCK_DATA_PURGE" has a zero value, execution proceeds to test the purge functionality.
(2) Any unit test which needs the data in the table simply increments "BLOCK_DATA_PURGE" at the beginning (in setUp()) and decrements the same variable in tearDown().
The effect of this is to allow various data consumers to block the purge functionality for as long as they need, without fear that the purge could execute in between tests. Effectively the purge operation is pushed to the last step... or at least the last step that requires the original data set.
Today I am going to extend this to add more functionality to allow some tests to REQUIRE_DATA_PURGE. These will effectively invert the above process to ensure that those tests only execute after the data purge to test data restoration.
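The gating pattern described above can be sketched in plain Python. This is a hypothetical illustration, not the author's actual code: `block_purge`, `unblock_purge`, and both test classes are invented names, and the real purge/consumer bodies are omitted.

```python
import os
import time
import unittest

def block_purge():
    # a consumer announces it still needs the shared data set
    n = int(os.environ.get("BLOCK_DATA_PURGE", "0"))
    os.environ["BLOCK_DATA_PURGE"] = str(n + 1)

def unblock_purge():
    # a consumer is done with the shared data set
    n = int(os.environ.get("BLOCK_DATA_PURGE", "0"))
    os.environ["BLOCK_DATA_PURGE"] = str(n - 1)

class DataConsumerTest(unittest.TestCase):
    def setUp(self):
        block_purge()       # hold the purge off while this test runs

    def tearDown(self):
        unblock_purge()

    def test_reads_data(self):
        pass                # ...work that relies on the original data set...

class DataPurgeTest(unittest.TestCase):
    def test_purge(self):
        # spin until no consumer still holds the data
        while int(os.environ.get("BLOCK_DATA_PURGE", "0")) > 0:
            time.sleep(1)   # the real version also logs that it is waiting
        # ...perform and verify the purge here...
```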
Kudos for the first paragraph. "Don't do that" always comes from some kind of inexperience, i.e. a developer who has never had to automate integration or user acceptance testing.
– pmneve
Dec 12 '17 at 17:50
edited Jun 19 '18 at 22:04
Rob Rose
answered Nov 6 '16 at 15:31
kingsisyphus
See the example of WidgetTestCase
on http://docs.python.org/library/unittest.html#organizing-test-code , it says that
Class instances will now each run one of the test_*() methods, with self.widget created and destroyed separately for each instance.
So specifying an order for the test cases is usually of no use, unless the tests share state through global variables.
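The isolation the docs describe is easy to demonstrate: each test_* method runs on its own freshly set-up instance, so mutations made by one test are invisible to the next. A minimal sketch, with a plain list standing in for a real widget:

```python
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        # a fresh fixture is built for every single test method
        self.widget = []          # stand-in for a real Widget object

    def test_append(self):
        self.widget.append(1)
        self.assertEqual(self.widget, [1])

    def test_starts_empty(self):
        # runs on its own instance: test_append's mutation is not visible
        self.assertEqual(self.widget, [])

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase).run(result)
print(result.testsRun, result.wasSuccessful())  # -> 2 True
```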
answered Mar 4 '12 at 9:08
jiakai
I have implemented a plugin nosedep for Nose which adds support for test dependencies and test prioritization.
As mentioned in the other answers/comments this is often a bad idea, however there can be exceptions where you would want to do this (in my case it was performance for integration tests - with a huge overhead for getting into a testable state, minutes vs hours).
A minimal example is:
def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass
This ensures that test_b is always run before test_a.
answered Nov 4 '18 at 19:30
Zitrax
The philosophy behind unit tests is to make them independent of each other. So the first step should always be to rethink how you are testing each piece so that it matches that philosophy. This can involve changing how you approach testing and being creative about narrowing your tests to smaller scopes.
However, if you still find that you need tests to run in a specific order (as that is viable), you could check out the answer to Python unittest.TestCase execution order.
answered Dec 28 '17 at 20:07
carrvo
Contrary to what was said here:
- tests have to run in isolation (order must not matter for that)
AND
- ordering them is important because it describes what the system does and how the developer implemented it.
In other words, each test gives you information about the system and the developer's logic.
So if that information is not ordered, it can make your code difficult to understand.
There are some scenarios where tests need to run in a specific order. For example: I have an API wrapper which logs into an external server. The login needs to happen before any other unittest.
– Steven M. Vascellaro
Nov 6 '17 at 19:11
@StevenVascellaro Tests you're describing are not unit tests, the topic is about unit testing, and you NEVER have scenario where tests need to run in a specific order. It's a code smell about bad design or wrong tests. If you write such kind of tests, you'd better review what you know about testing, because to me the tests you're describing are useless and make the code hard to change. Think about it, you're talking to test an external system, which should already be tested. Focus on YOUR system that's what you're testing.
– gregorySalvan
Nov 8 '17 at 20:55
1
The topic is not about unit testing but about using unittest to automate integration testing. As a QA Test Automation engineer, my job is to do integration and browser-based 'end-to-end' testing. Unit testing (whatever the tool is) is in the domain of the developer doing test first development or doing their best to turn over clean code to the next step in testing.
– pmneve
Dec 12 '17 at 17:45
:)))) OK just this "QA Test Automation engineer" is enough to be sure that we'll never be able to discuss together, and that your code is a total mess for me. Sorry
– gregorySalvan
Dec 28 '17 at 20:37
answered Nov 10 '12 at 4:57
gregorySalvan