Test Strategy
rajeshkumar created the topic: This is a test 2
This is a test 2
Regards,
Rajesh Kumar
Tweet me @ twitter.com/RajeshKumarIn
pasupuleti2 created the topic: Test Results
Rajesh,
How can I get the results, with graphics, for each action in each build in Jenkins?
Ex: checkout code from Git -> 1 min
Build -> 3 mins
unit test -> 15 mins (why is it taking 15 mins?)
integration test -> 3 mins
Deployment -> 2 mins (2 different locations)
Emails -> 1 min
Also display the test results for each test case in the unit tests and the integration tests.
There is no official distinction between code coverage and test coverage; practitioners have expressed differing opinions on how to define them.
Code coverage and test coverage are both metrics that can be useful for assessing the quality of your application code. Code coverage describes which application code is exercised while the application is running.
Test coverage, on the other hand, refers to metrics in an overall test plan. In this expert response, you'll learn how quality assurance professionals use both of these metrics effectively.
Another definition, found via a Google search, is as follows:
Code coverage is a measure of how much code is executed during testing, and
test coverage is a measure of how many test cases have been executed during testing.
Let's look at the definition of code coverage in more detail.
In computer science, code coverage is a measure used to describe the degree to which the source code of a program is tested by a particular test suite. A program with high code coverage has been more thoroughly tested and has a lower chance of containing software bugs than a program with low code coverage. Many different metrics can be used to calculate code coverage; some of the most basic are the percent of program subroutines and the percent of program statements called during execution of the test suite.
There are a number of coverage criteria, the main ones being [taken from Wikipedia]:
Function coverage – has each function (or subroutine) in the program been called?
Statement coverage – has each statement in the program been executed?
Branch coverage – has each branch (such as the true and false cases of each if statement) been executed?
Condition coverage – has each Boolean sub-expression evaluated both to true and to false?
Simply put, code coverage is a way of ensuring that your tests are actually testing your code. When you run your tests you are presumably checking that you are getting the expected results. Code coverage will tell you how much of your code you exercised by running the test. Your tests may all pass with flying colours, but if you’ve only tested 50% of your code, how much confidence can you have in it?
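To make the "passing tests, 50% coverage" point concrete, here is a minimal, hypothetical sketch (the class and method names are invented for illustration): one method with two branches, and a passing check that exercises only one of them. A coverage tool such as JaCoCo would report the untaken branch as unexecuted, so the green result alone says little about how thoroughly the code is tested.

```java
// Hypothetical class under test: one method, two branches.
public class Classifier {

    static String classify(int n) {
        if (n >= 0) {
            return "non-negative";   // branch exercised by the check below
        }
        return "negative";           // branch never executed by the check
    }

    // A passing "test" that only exercises the first branch. A coverage
    // tool would flag the "negative" return as uncovered, i.e. roughly
    // 50% branch coverage for classify() despite the test passing.
    public static void main(String[] args) {
        if (!classify(5).equals("non-negative")) {
            throw new AssertionError("unexpected classification");
        }
        System.out.println("test passed");
    }
}
```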
When writing JUnit tests developers often add log statements that can help provide information on test failures. During the initial attempt to find a failure a simple System.out.println() statement is usually the first resort of most developers.
Replacing these System.out.println() statements with log statements is the first improvement on this technique. Using SLF4J (Simple Logging Facade for Java) provides some neat improvements using parameterized messages. Combining SLF4J with JUnit 4 rule implementations can provide more efficient test class logging techniques.
Some examples will help illustrate how SLF4J and JUnit 4 rule implementations offer improved test logging techniques. As mentioned, the initial solution for most developers is to use System.out.println() statements. The simple example code below shows this method.
import org.junit.Test;

public class LoggingTest {

    @Test
    public void testA() {
        System.out.println("testA being run...");
    }

    @Test
    public void testB() {
        System.out.println("testB being run...");
    }
}
The obvious improvement here is to use logging statements rather than the System.out.println() statements. Using SLF4J enables us to do this simply whilst allowing the end user to plug in their desired logging framework at deployment time. Replacing the System.out.println() statements with SLF4J log statements directly results in the following code.
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

    final Logger logger = LoggerFactory.getLogger(LoggingTest.class);

    @Test
    public void testA() {
        logger.info("testA being run...");
    }

    @Test
    public void testB() {
        logger.info("testB being run...");
    }
}
Looking at the code, the hard-coded method names in the log statements would be better obtained using JUnit 4's TestName @Rule, which makes the current test name available inside test methods. Replacing the hard-coded string values with the TestName rule results in the following updated code.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

    @Rule public TestName name = new TestName();

    final Logger logger = LoggerFactory.getLogger(LoggingTest.class);

    @Test
    public void testA() {
        logger.info(name.getMethodName() + " being run...");
    }

    @Test
    public void testB() {
        logger.info(name.getMethodName() + " being run...");
    }
}
SLF4J offers an improvement over the log statements in the example above that provides faster logging. Parameterized messages let SLF4J first evaluate whether the message should be logged at all; the message parameters are only resolved if the message will actually be logged. According to the SLF4J manual, this can improve performance by a factor of at least 30 in the case of a disabled logging statement.
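The saving can be sketched without SLF4J itself. The toy logger below is invented for illustration (it is not the real SLF4J API): it counts how often an expensive toString() runs. With string concatenation, the argument is formatted at the call site even when the level is disabled; the parameterized style defers formatting until the level check passes.

```java
// Toy illustration of why parameterized messages are cheaper when a
// logging level is disabled. Not the real SLF4J implementation.
public class ParamDemo {

    static int toStringCalls = 0;

    static class Expensive {
        @Override
        public String toString() {
            toStringCalls++;          // stands in for costly formatting work
            return "expensive value";
        }
    }

    static final boolean INFO_ENABLED = false;  // pretend INFO is disabled

    // Concatenation style: "value: " + o forces toString() before the
    // level is even checked.
    static void logConcat(Object o) {
        String msg = "value: " + o;
        if (INFO_ENABLED) {
            System.out.println(msg);
        }
    }

    // Parameterized style: the argument is only formatted inside the
    // level check, mirroring what SLF4J does with "{}" placeholders.
    static void logParam(String pattern, Object arg) {
        if (INFO_ENABLED) {
            System.out.println(pattern.replace("{}", String.valueOf(arg)));
        }
    }

    public static void main(String[] args) {
        Expensive e = new Expensive();
        logConcat(e);              // toString() runs despite the disabled level
        logParam("value: {}", e);  // toString() skipped entirely
        System.out.println("toString() calls: " + toStringCalls);  // prints 1
    }
}
```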
Updating the code to use SLF4J parameterized messages results in the following code.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

    @Rule public TestName name = new TestName();

    final Logger logger = LoggerFactory.getLogger(LoggingTest.class);

    @Test
    public void testA() {
        logger.info("{} being run...", name.getMethodName());
    }

    @Test
    public void testB() {
        logger.info("{} being run...", name.getMethodName());
    }
}
Quite clearly the logging statements in this code don’t conform to the DRY principle.
Another JUnit 4 rule implementation enables us to correct this issue. Using TestWatchman, we can create an implementation that overrides the starting(FrameworkMethod method) callback to provide the same functionality while maintaining the DRY principle. The TestWatchman rule also lets developers override methods invoked when a test finishes, fails, or succeeds.
Using the TestWatchman Rule results in the following code.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.MethodRule;
import org.junit.rules.TestWatchman;
import org.junit.runners.model.FrameworkMethod;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

    final Logger logger = LoggerFactory.getLogger(LoggingTest.class);

    @Rule public MethodRule watchman = new TestWatchman() {
        @Override
        public void starting(FrameworkMethod method) {
            logger.info("{} being run...", method.getName());
        }
    };

    @Test
    public void testA() {
    }

    @Test
    public void testB() {
    }
}
And there you have it. A nice test code logging technique using JUnit 4 rules taking advantage of SLF4J parameterized messages.
I would be interested to hear from anyone using this or similar techniques based on JUnit 4 rules and SLF4J.
Reference: http://www.catosplace.net/