Friday, August 01, 2008

Good tutorials on three popular .NET mock frameworks.

Here are three tutorials on popular .NET mock frameworks by Stephen Walther.

TypeMock

Moq

Rhino Mocks

Very good tutorials on three major frameworks.

Friday, July 25, 2008

.NET mock frameworks

I have been using NUnit/MbUnit/VSTest to develop my tests for a while, but I don't use mock frameworks much. I have read some blog posts about them, but I didn't spend much time digging into the frameworks themselves. To use mock frameworks such as Rhino.Mocks and Moq effectively, you have to write your code in a TDD/BDD-friendly way. Unfortunately, most of my projects are not designed this way, so I have to set up those stubs manually.
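
For illustration, here is roughly what I mean by TDD/BDD friendly (the OrderService, IOrderRepository, and Order names are made up for this sketch): the dependency is an interface injected through the constructor, so a test can hand the class a stub or a mock instead of the real data-access code.

public enum OrderStatus { Pending, Approved }

public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
}

public interface IOrderRepository
{
    Order GetOrder(int id);
    void Save(Order order);
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    // The dependency comes in through the constructor, so tests can pass a stub or a mock.
    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public void Approve(int orderId)
    {
        Order order = _repository.GetOrder(orderId);
        order.Status = OrderStatus.Approved;
        _repository.Save(order);
    }
}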

I started to look into the ASP.NET MVC framework lately and think it's a good opportunity to look into these mock frameworks, because ASP.NET MVC will definitely give them a push. If you download the Northwind example from here, you will find that the example uses the Moq framework.

Martin Fowler wrote a very good article on this topic. I belong to the classical testers in his definition.

Stephen Walther wrote a very good article here, and he provides a lot of background information regarding the Moq framework. His summary of Fowler's article is insightful:

In this paper, Fowler makes several distinctions. First, he distinguishes a stub from a mock. According to Fowler – who uses Meszaros’ definitions here – stubs “provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.” Mocks, on the other hand, are “objects pre-programmed with expectations which form a specification of the calls they are expected to receive”.

This distinction between stubs and mocks leads Fowler to distinguish between state verification and behavior verification. Stubs are most often used when performing state verification. When performing state verification, you are interested in determining whether a certain condition is true or false at the end of a test. Mocks, on the other hand, are most often used when performing behavior verification. When performing behavior verification, you are interested in how the mock objects interact. For example, you want to know whether or not a certain method was called on one mock object after a method was called on another mock object.

Fowler makes a final distinction between classical TDD and mockist TDD. This final distinction concerns the “philosophy to the way testing and design play together”. A classical TDDer tends to use stubs and state verification. According to Fowler, the “classical TDD style is to use real objects if possible and a double if it's awkward to use the real thing.” A mockist TDDer, on the other hand, almost always uses mocks and behavior verification. A mockist TDDer “will always use a mock for any object with interesting behavior.”

I found it very interesting to read the history of Moq from its designer, kzu. He created the framework initially because he was not satisfied with the existing frameworks; however, he has had to add a lot of other features to his own framework, such as VerifyAll. You can find a lot of interesting discussion in the evolving history of Moq. Personally, I don't see Moq providing much more than Rhino Mocks, so I will probably reference both in my projects.
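
To make the stub/mock distinction concrete, here is a minimal Moq sketch against the hypothetical OrderService/IOrderRepository shown in the sketch above. Note that Moq's API has changed across versions (early releases used Expect where later ones use Setup), so treat the exact method names as approximate.

using Moq;
using NUnit.Framework;

[TestFixture]
public class OrderServiceTests
{
    // State verification: the repository is just a stub returning canned data,
    // and the test asserts on the resulting state of the order.
    [Test]
    public void Approve_sets_the_order_status()
    {
        var order = new Order { Id = 1, Status = OrderStatus.Pending };
        var stub = new Mock<IOrderRepository>();
        stub.Setup(r => r.GetOrder(1)).Returns(order);

        new OrderService(stub.Object).Approve(1);

        Assert.AreEqual(OrderStatus.Approved, order.Status);
    }

    // Behavior verification: the repository is a mock, and the test verifies
    // that Save was actually called (or set up expectations and call VerifyAll).
    [Test]
    public void Approve_saves_the_order()
    {
        var order = new Order { Id = 1, Status = OrderStatus.Pending };
        var mock = new Mock<IOrderRepository>();
        mock.Setup(r => r.GetOrder(1)).Returns(order);

        new OrderService(mock.Object).Approve(1);

        mock.Verify(r => r.Save(order));
    }
}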

Thursday, July 17, 2008

Unicode and ANSI functions in the CRT

From the book Windows via C/C++

 

Unicode and ANSI Functions in the C Run-Time Library

Like the Windows functions, the C run-time library offers one set of functions to manipulate ANSI characters and strings and another set of functions to manipulate Unicode characters and strings. However, unlike Windows, the ANSI functions do the work; they do not translate the strings to Unicode and then call the Unicode version of the functions internally. And, of course, the Unicode versions do the work themselves too; they do not internally call the ANSI versions.

An example of a C run-time function that returns the length of an ANSI string is strlen, and an example of an equivalent C run-time function that returns the length of a Unicode string is wcslen.

Both of these functions are prototyped in String.h. To write source code that can be compiled for either ANSI or Unicode, you must also include TChar.h, which defines the following macro:

#ifdef _UNICODE
#define _tcslen wcslen
#else
#define _tcslen strlen
#endif


Now, in your code, you should call _tcslen. If _UNICODE is defined, it expands to wcslen; otherwise, it expands to strlen. By default, when you create a new C++ project in Visual Studio, _UNICODE is defined (just like UNICODE is defined). The C run-time library always prefixes identifiers that are not part of the C++ standard with underscores, while the Windows team does not do this. So, in your applications you'll want to make sure that both UNICODE and _UNICODE are defined or that neither is defined. Appendix A, "The Build Environment," will describe the details of the CmnHdr.h header file used by all the code samples of this book to avoid this kind of problem.

Wednesday, July 16, 2008

A concept which has confused me for a while…

How do you organize C++ header files with templates? Here is a good explanation from C++ Templates: The Complete Guide.

 

6.1 The Inclusion Model

There are several ways to organize template source code. This section presents the most popular approach as of the time of this writing: the inclusion model.

6.1.1 Linker Errors

Most C and C++ programmers organize their nontemplate code largely as follows:

  • Classes and other types are entirely placed in header files. Typically, this is a file with a .hpp (or .H, .h, .hh, .hxx) filename extension.

  • For global variables and (noninline) functions, only a declaration is put in a header file, and the definition goes into a so-called dot-C file. Typically, this is a file with a .cpp (or .C, .c, .cc, or .cxx) filename extension.

This works well: It makes the needed type definition easily available throughout the program and avoids duplicate definition errors on variables and functions from the linker.

With these conventions in mind, a common error about which beginning template programmers complain is illustrated by the following (erroneous) little program. As usual for "ordinary code," we declare the template in a header file:

// basics/myfirst.hpp 

#ifndef MYFIRST_HPP
#define MYFIRST_HPP

// declaration of template
template <typename T>
void print_typeof (T const&);

#endif // MYFIRST_HPP


print_typeof() is the declaration of a simple auxiliary function that prints some type information. The implementation of the function is placed in a dot-C file:



// basics/myfirst.cpp 

#include <iostream>
#include <typeinfo>
#include "myfirst.hpp"

// implementation/definition of template
template <typename T>
void print_typeof (T const& x)
{
    std::cout << typeid(x).name() << std::endl;
}


The example uses the typeid operator to print a string that describes the type of the expression passed to it (see Section 5.6 on page 58).



Finally, we use the template in another dot-C file, into which our template declaration is #included:



// basics/myfirstmain.cpp 

#include "myfirst.hpp"

// use of the template
int main()
{
    double ice = 3.0;
    print_typeof(ice); // call function template for type double
}


A C++ compiler will most likely accept this program without any problems, but the linker will probably report an error, implying that there is no definition of the function print_typeof().



The reason for this error is that the definition of the function template print_typeof() has not been instantiated. In order for a template to be instantiated, the compiler must know which definition should be instantiated and for what template arguments it should be instantiated. Unfortunately, in the previous example, these two pieces of information are in files that are compiled separately. Therefore, when our compiler sees the call to print_typeof() but has no definition in sight to instantiate this function for double, it just assumes that such a definition is provided elsewhere and creates a reference (for the linker to resolve) to that definition. On the other hand, when the compiler processes the file myfirst.cpp, it has no indication at that point that it must instantiate the template definition it contains for specific arguments.





6.1.2 Templates in Header Files


The common solution to the previous problem is to use the same approach that we would take with macros or with inline functions: We include the definitions of a template in the header file that declares that template. For our example, we can do this by adding



#include "myfirst.cpp" 


at the end of myfirst.hpp or by including myfirst.cpp in every dot-C file that uses the template. A third way, of course, is to do away entirely with myfirst.cpp and rewrite myfirst.hpp so that it contains all template declarations and template definitions:



// basics/myfirst2.hpp 

#ifndef MYFIRST_HPP
#define MYFIRST_HPP

#include <iostream>
#include <typeinfo>

// declaration of template
template <typename T>
void print_typeof (T const&);

// implementation/definition of template
template <typename T>
void print_typeof (T const& x)
{
    std::cout << typeid(x).name() << std::endl;
}

#endif // MYFIRST_HPP


This way of organizing templates is called the inclusion model. With this in place, you should find that our program now correctly compiles, links, and executes.



There are a few observations we can make at this point. The most notable is that this approach has considerably increased the cost of including the header file myfirst.hpp. In this example, the cost is not the result of the size of the template definition itself, but the result of the fact that we must also include the headers used by the definition of our template—in this case <iostream> and <typeinfo>. You may find that this amounts to tens of thousands of lines of code because headers like <iostream> contain similar template definitions.



This is a real problem in practice because it considerably increases the time needed by the compiler to compile significant programs. We will therefore examine some possible ways to approach this problem in upcoming sections. However, real-world programs quickly end up taking hours to compile and link (we have been involved in situations in which it literally took days to build a program completely from its source code).



Despite this build-time issue, we do recommend following this inclusion model to organize your templates when possible. We examine two alternatives, but in our opinion their engineering deficiencies are more serious than the build-time issue discussed here. They may have other advantages not directly related to the engineering aspects of software development, however.



Another (more subtle) observation about the inclusion approach is that noninline function templates are distinct from inline functions and macros in an important way: They are not expanded at the call site. Instead, when they are instantiated, they create a new copy of a function. Because this is an automatic process, a compiler could end up creating two copies in two different files, and some linkers could issue errors when they find two distinct definitions for the same function. In theory, this should not be a concern of ours: It is a problem for the C++ compilation system to accommodate. In practice, things work well most of the time, and we don't need to deal with this issue at all. For large projects that create their own library of code, however, problems occasionally show up. A discussion of instantiation schemes in Chapter 10 and a close study of the documentation that came with the C++ translation system (compiler) should help address these problems.



Finally, we need to point out that what applies to the ordinary function template in our example also applies to member functions and static data members of class templates, as well as to member function templates.

Saturday, July 12, 2008

Going back to basics – should 0 or 1 be returned from a function?

Here is a very good article about this topic.

The basic question is whether you should return 0 or 1 to indicate success. In a main function, you normally return 0 for success and other values as failure codes. However, when you map that onto a boolean, 0 is false and non-zero is true, which adds an extra level of confusion. So, if you want to return a boolean state from a function, it's best to use bool, NOT an integer.
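
A quick C# sketch of the two conventions side by side (the program and method names are made up): Main uses 0 for success and non-zero failure codes, while a yes/no question is answered with a bool rather than a 0/1 integer.

using System;

class Program
{
    // Process exit code convention: 0 means success, non-zero values are failure codes.
    static int Main(string[] args)
    {
        if (args.Length == 0)
        {
            Console.Error.WriteLine("usage: program <input>");
            return 1;   // failure code
        }

        if (!IsValidInput(args[0]))
        {
            Console.Error.WriteLine("invalid input");
            return 2;   // a different failure code
        }

        Console.WriteLine("OK");
        return 0;       // success
    }

    // Yes/no question: return bool, not 0/1, so callers can't confuse the two conventions.
    static bool IsValidInput(string input)
    {
        return !string.IsNullOrEmpty(input);
    }
}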

Sunday, June 22, 2008

Ignore folders in svn

This is copied from the help file:

To ignore all CVS folders you should either specify a pattern of *CVS or better, the pair CVS */CVS. The first option works, but would also exclude something called ThisIsNotCVS. Using */CVS alone will not work on an immediate child CVS folder, and CVS alone will not work on sub-folders.

Tuesday, May 20, 2008

Web Testing, Load Testing, Unit Testing...

I understand that unit testing should cover most scenarios if an application is nicely designed with a unit-testing-friendly framework, typically one based on the MVC pattern. In the .NET world, those unit-testing-friendly frameworks include ASP.NET MVC and the Castle Project. But a lot of times an application has already been designed and put into production, and you don't have the time, money, or energy to rewrite an anti-unit-testing application on top of a unit-testing-friendly framework. Most importantly, business users don't understand why you would rewrite an application just to be able to unit test it; they just need a workable application and don't care much about the back-end unit testing, even though it is an essential part of the whole project.

I am working on an application which depends heavily on the CSLA.NET framework, which is a very impressive framework. However, the framework is not very TDD friendly; check out some discussions in this post. I started to look for other alternatives that can automate the testing process, and Visual Studio Team System has had an edition for software testers since Visual Studio 2005. Honestly, though, I won't recommend using it unless you have the VS2008 version. The biggest limitation of the VS2005 tester edition is that it doesn't support Ajax testing natively, and you have to rely on the Fiddler tool to record the tests. The VS2005 edition also cannot detect dynamic parameters very well, and I had big trouble getting those viewstate-related variables to work. The VS2008 version definitely made a lot of improvements in those areas, although it's not free of bugs either. Check out an excellent post on web testing in Visual Studio 2008 here.

A couple of roadblocks I ran into while automating the tests:

1. You have to turn page event validation off and the view state MAC check off. You probably want to turn them back on in the production environment.

<pages theme="Main" enableEventValidation="false" enableViewStateMac="false" >

2. You may need to hard-code a machine key if you want to port the tests to different sites. In our case, we recorded the tests on a test website, but we also want to run them on the staging site.

<machineKey
    validationKey="9A1F64F585E06F97562808D96860AC9E3DA5F231EA34B42E8C98AE02B52EAF33CF7EF24C618F7F391756974090458C9740BE1007E6F898161C39B863A7E46C3D"
    decryptionKey="376909E1031E5AA520AAE33B554ACACCE0CAEA2BA0B142FB3050D814059398CB"
    validation="SHA1" decryption="AES"/>

3. Some post parameters are incorrectly promoted to dynamic parameters; watch out for those and correct them manually if needed.

4. Check out this web test plug-in if you want to use some .NET functions in your tests.

Sometimes you want to set up some specific pre-test conditions and tear down the values added to your database during testing; you can create a plug-in to do this work (a rough sketch is included below).

WebTestPlugIn



In our projects, in PreWebTest we create an object in the database which our tests need to run against, and in PostWebTest we delete this object. It works very well for our situation. We run those tests every day, and those green pass icons definitely make our lives much easier.
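
For reference, the skeleton of such a plug-in looks roughly like this. It is only a sketch: the TestDataHelper class and the context key name are placeholders for whatever setup/teardown code and naming your own project uses.

using Microsoft.VisualStudio.TestTools.WebTesting;

public class SetupTeardownPlugin : WebTestPlugin
{
    // Runs before the web test starts: create the object the test needs.
    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        int id = TestDataHelper.CreateTestObject();
        e.WebTest.Context["TestObjectId"] = id;   // make the id available to the test
    }

    // Runs after the web test finishes: clean up what we created.
    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        TestDataHelper.DeleteTestObject((int)e.WebTest.Context["TestObjectId"]);
    }
}

// Hypothetical helper standing in for the project's real data-access code.
static class TestDataHelper
{
    public static int CreateTestObject() { /* real setup code goes here */ return 0; }
    public static void DeleteTestObject(int id) { /* real teardown code goes here */ }
}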

TestResults