<h2>Monkey see, monkey do, occasionally monkey learn.</h2>

<h3>Blog has moved... (2015-07-19)</h3>
After seeing what I could get from Ghost as a blogging platform, I decided to move my blog to Ghost - blog.many-monkeys.com - I hope you like it.

<h3>Using GMock with Visual Studio CppUnitTestFramework (2015-04-03)</h3>
One of the things I have been a bit disappointed with myself about during the development of <a href="https://github.com/OpenCover/opencover">OpenCover</a> is the lack of unit testing around the C++ code that makes up the profiler. I did toy with <a href="https://code.google.com/p/googletest/">GTest</a> and got some decent tests around the instrumentation engine, but I was never able to actually test the profiler callbacks, and I found the lack of GTest integration with Visual Studio quite irritating; I know I have been spoilt by ReSharper. Recently, however, while handling <a href="https://msdn.microsoft.com/en-us/library/hh549175.aspx">Fakes</a> through OpenCover, I had an opportunity to work out how to load the profiler using registry-free loading and realised that perhaps such testing might be within my reach. What I was missing, however, was a mocking library, and one that I could use with Visual Studio tooling.<br />
<br />
Frankly GMock was the only candidate, the commercial alternatives being out as this was for an OSS project, but the instructions all seemed to want you to build a number of libraries (64/32-bit, Debug/Release) that I would have to statically link against and then maintain should the source or build options change. I decided to try a different tack that wouldn't involve building libraries, and it has worked out reasonably well, so I thought it would be worth commenting on here.<br />
<br />
<h4>
Step 1 </h4>
<div>
Get the latest GMock (1.7.0) library as a zip file and uncompress it somewhere within your repository.<br />
<br /></div>
<div>
<h4>
Step 2</h4>
</div>
<div>
From within Visual Studio update the Additional Include Directories to include the following paths</div>
<div>
<br /></div>
<pre>$(SolutionDir)lib\gmock-1.7.0
$(SolutionDir)lib\gmock-1.7.0\include
$(SolutionDir)lib\gmock-1.7.0\gtest
$(SolutionDir)lib\gmock-1.7.0\gtest\include</pre>
<div>
<h4>
Step 3</h4>
</div>
<div>
Add the following to your "stdafx.h"<br />
<br /></div>
<div>
<pre class="brush:cpp;">#include "gmock/gmock.h"
#include "gtest/gtest.h"</pre>
</div>
<div>
<h4>
Step 4</h4>
</div>
<div>
Add the following to your "stdafx.cpp"<br />
<br />
<pre class="brush:cpp;">// The following lines pull in the real gmock *.cc files.
#include "src/gmock-cardinalities.cc"
#include "src/gmock-internal-utils.cc"
#include "src/gmock-matchers.cc"
#include "src/gmock-spec-builders.cc"
#include "src/gmock.cc"
// The following lines pull in the real gtest *.cc files.
#include "src/gtest.cc"
#include "src/gtest-death-test.cc"
#include "src/gtest-filepath.cc"
#include "src/gtest-port.cc"
#include "src/gtest-printers.cc"
#include "src/gtest-test-part.cc"
#include "src/gtest-typed-test.cc"</pre>
</div>
<h4>
Step 5</h4>
Now all you need to do is initialise GMock and you are ready; as I am using the CppUnitTestFramework I do the following.<br />
<br />
<pre class="brush:cpp;">TEST_MODULE_INITIALIZE(ModuleInitialize)
{
// enable google mock
::testing::GTEST_FLAG(throw_on_failure) = true;
int argc = 0;
TCHAR **argv = NULL;
::testing::InitGoogleMock(&argc, argv);
}
</pre>
<br />
Now all you need to do is follow the GMock documentation and add some expectations etc.; you can, as I discovered, even mock COM objects and set expectations on them, e.g.<br />
<br />
<pre class="brush:cpp;">EXPECT_CALL(*profilerInfo, SetEventMask(EVENT_MASK_WHEN_FAKES))
.Times(1)
.WillRepeatedly(Return(S_OK));</pre>
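<br />
For reference, here is a minimal sketch of how such a mock can be declared. The IWidget interface below is hypothetical (for the profiler you would derive from the real COM interface instead); COM methods use the STDMETHODCALLTYPE calling convention, so GMock's _WITH_CALLTYPE macro variants are needed:<br />
<br />
<pre class="brush:cpp;">// Sketch only: a hypothetical COM-style interface and its GMock 1.7 mock.
struct IWidget : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE SetEventMask(DWORD dwEvents) = 0;
};

class MockWidget : public IWidget
{
public:
    // COM methods use STDMETHODCALLTYPE, hence the _WITH_CALLTYPE macros.
    MOCK_METHOD1_WITH_CALLTYPE(STDMETHODCALLTYPE, SetEventMask, HRESULT(DWORD dwEvents));
    // IUnknown must also be satisfied before the mock is usable.
    MOCK_METHOD2_WITH_CALLTYPE(STDMETHODCALLTYPE, QueryInterface, HRESULT(REFIID riid, void** ppvObject));
    MOCK_METHOD0_WITH_CALLTYPE(STDMETHODCALLTYPE, AddRef, ULONG());
    MOCK_METHOD0_WITH_CALLTYPE(STDMETHODCALLTYPE, Release, ULONG());
};</pre>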
<h4>
Bonus Round
</h4>
<div>
There were a few little niggles, however. The first is that if an expectation fails, the Visual Studio test runner takes a little too long to close down (I suspect this may be something on my machine related to DrWatson). </div>
<div>
<br />
The second was that if an expectation did fail I could initially only see the result using DebugView - ugh - however I found a solution at <a href="http://www.durwella.com/post/96457792632/extending-microsoft-cppunittestframework">http://www.durwella.com/post/96457792632/extending-microsoft-cppunittestframework</a> which involves using some extra macros; I added these to my "stdafx.h" and voila, the results are now available in Visual Studio.<br />
<br /></div>
<div>
Finally, I found the mocks were not very lightweight; in fact, if I left them hooked in they caused performance issues, however by replacing them with an admittedly less useful stub I could avoid this when necessary.</div>
<h3>Happy Birthday OpenCover (2015-02-22)</h3>
<h4>
Happy Birthday</h4>
<br />
Today <a href="https://github.com/OpenCover/opencover">OpenCover</a> is 4 (four) years old - where has the time gone? In that time it has had over 60,000 <a href="http://www.nuget.org/packages/opencover">NuGet downloads</a>, been adopted by the SharpDevelop community as the coverage tool for their IDE, and, as I found out the other day, is also being used by the <a href="https://github.com/dotnet/corefx">corefx team</a> to supply <a href="http://dotnet-ci.cloudapp.net/job/dotnet_corefx_coverage_windows/Code_Coverage_Report/">coverage information</a> on their tests.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-wbgRAqFXUas/VOf6400oiCI/AAAAAAAAArU/CwBzv_SkjBc/s1600/2-1.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://3.bp.blogspot.com/-wbgRAqFXUas/VOf6400oiCI/AAAAAAAAArU/CwBzv_SkjBc/s1600/2-1.jpg" height="200" width="200" /></a></div>
Four years ago I started on OpenCover (<a href="https://github.com/OpenCover/opencover/commit/23ecce5026b5f609faad57bae3917d4248749316">first commit</a> - not very interesting but a stake in the ground) in order to create a code coverage tool for the .NET platform that could be used by anyone, but especially so that those of us in the open source community could have a tool available to us to help enhance our testing feedback; in the past we have seen some tools go commercial, some just vanish and others just abandoned. I also wanted to share some of the knowledge I had picked up in this area but no longer used in my day-to-day activities and to ensure it remains within the community by making it maintainable and available without restriction.<br />
<br />
It took nearly 6 months to get the first <a href="http://scubamunki.blogspot.com.au/2011/06/opencover-first-beta-release.html">beta release</a> and since that time we have added sequence and branch coverage, support for .NET 2 and .NET 4+, 32 and 64 bit support, and even Silverlight. Later came features such as coverage by test, hooking into services, and IIS support; not everything works as seamlessly as I would like but the community has either lived with it or improved it - which was the outcome I was seeking. Just recently we even added support for Microsoft.Fakes because some people wanted to use OpenCover for coverage with their tests that used Fakes rather than the coverage tool that they already had available; that was an interesting learning exercise, helped along by some very fortuitous googling.<br />
<br />
There even seems to be some movement to make a Mono version of OpenCover, which was not something I saw coming but is also quite exciting, especially as Visual Studio now has support for Android and iPhone development; we knew about Xamarin/Mono, but actual Visual Studio integration? Who 4 years ago would have seen that one coming...?<br />
<h4>
</h4>
<h4>
Highlights</h4>
<br />
One of the highlights of the past few years was starting at my current place of work (MYOB) and then overhearing a conversation within the devops/build team who were discussing the coverage results of this free coverage tool they had found on github; imagine my delight when I realised it was OpenCover they were discussing, and in mostly favourable terms. This was the first place I had seen OpenCover being used where it wasn't introduced by me. I even implemented a <a href="https://github.com/OpenCover/opencover/issues/133">feature</a> in response to their comments.<br />
<br />
Another highlight is seeing that at least two Visual Studio integrations involving OpenCover are currently in play; both were started independently, and though I am currently partly involved with one of them it will be interesting to see how they both progress.<br />
<br />
I'd like to thank everyone who has contributed to OpenCover either through direct contribution, suggestions, free stuff (more please) or just using it. Here's to another 4+ interesting years, and I wonder what will happen to OpenCover in that time - suggestions?

<h3>Microservices... Where to Start? (2014-10-29)</h3>
Micro-services are becoming a "thing" now and are probably the de-facto choice when someone begins a new project with cloud hosting in mind, but where do you start when you have a brownfield project? I don't have any hot answers or amazing insights here; all I can do is describe what my first "micro-service" was and how it came into being.<br />
<br />
Over time the application was getting more use and the number of servers involved started to increase; we were using auto-scaling and the number of servers increased in line with usage but wavered between 8 and 24 instances. This quite rightly caused some consternation, so we tinkered with the number-of-cores settings for each instance and the thresholds for the triggers that scale up and down, but nothing seemed to alter the total number of cores being used. We have a hefty amount of logging and we can control the output through logging levels, so we decided to change the logging to try and get more diagnostic information, and this is when things got interesting. As this is a production system, getting hold of this log information was initially problematic and slow, so we had already started forwarding all the messages to <a href="https://www.splunkstorm.com/">SplunkStorm</a> using the available API; all was well (for over a year) and we were very impressed with how we could use that information for ad-hoc queries. However, when we changed the logging levels the servers started scaling and we started to get database errors; unusual ones involving SQL connection issues rather than SQL query errors. We quickly reverted the changes and decided to try and replicate the problem in our CI/SIT environments.<br />
<br />
What we realized was that our own logging was causing our performance issues and, even more awkwardly, was also responsible for the SQL connection issues, as the logging to SplunkStorm via its API was using up the available TCP/IP connections; this was even more pronounced when we changed the logging level. What we needed to do was refactor our logging so that we could get all our data into SplunkStorm (and Splunk, as we were also in the process of migrating to SplunkStorm's big brother) with minimum impact on the actual production systems. Thankfully our logging framework used NLog, which we had wrapped in another entity for mocking purposes, so we decided to write a new NLog target that would instead log to a queue (service-bus) and then have another service read messages from that queue and forward them to Splunk and SplunkStorm; and thus our first micro-service was born.<br />
<br />
The new NLog target took the log messages and batch-pushed them to the queue; then a microservice was written that monitors the queue, pulls messages off in batches, and pushes them to Splunk and SplunkStorm, also in batches. The initial feasibility spike took half a day, with the final implementation being ready and pushed into production the following week. Because we were using .NET we could also take advantage of multiple threads, so we used thread-pools to limit the number of active Splunk/SplunkStorm messages being sent in parallel. What we found after deployment was that we could scale back our main application servers to 4 instances with only a pair of single-core services dealing with the logging aspect; we also noticed that the auto-scaling never reaches its old thresholds and the instance count has been stable ever since. Another advantage is that the queue can now be used by other services to push messages to Splunk, and they can even use the same NLog target in their projects to deal with all the complexities.<br />
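<br />
As a minimal sketch of the idea (assuming the NLog 2.x target API; the class name and the IQueueClient abstraction are illustrative rather than our production code), such a target looks something like the following - batching can then be had by wrapping it in NLog's BufferingWrapper or AsyncWrapper:<br />
<br />
<pre class="brush:csharp;">using NLog;
using NLog.Targets;

[Target("ServiceBusQueue")]
public class ServiceBusQueueTarget : TargetWithLayout
{
    // Hypothetical abstraction over the service-bus client.
    public interface IQueueClient { void Send(string message); }

    public IQueueClient Queue { get; set; }

    protected override void Write(LogEventInfo logEvent)
    {
        // Render the event with the configured layout and hand it to the queue;
        // the forwarding microservice drains the queue and pushes the messages
        // on to Splunk/SplunkStorm in batches.
        Queue.Send(Layout.Render(logEvent));
    }
}</pre>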
<br />
I hope the above shows that your first micro-service does not have to be something elaborate; it can instead deal with a mundane but quite essential task, and the benefits can be quite astounding.

<h3>Excluding code from coverage... (2014-10-13)</h3>
This may (no guarantees) turn into a series of posts on how to refactor your code for testing using simple examples.<br />
<br />
This particular example came from a request to add an "Exclude Lines from Coverage" feature to <a href="https://github.com/OpenCover/opencover">OpenCover</a>. Now there are many ways this could be achieved, none of which I had any appetite for, as they were either too clunky and/or could make OpenCover very slow. I am also not a big fan of excluding anything from code coverage; though OpenCover has several exclude options, I just thought that this was one step too far in order to achieve that 100% coverage value, as it could be too easily abused. Even if I did think the feature was useful it still may not get implemented by myself for several days, weeks or months.<br />
<br />
But sometimes there are other ways to cover your code without the big refactoring and mocking exercise that can act as a deterrent to doing the right thing.<br />
<br />
In this case the user was using EntityFramework and wanted to exclude the code in the catch handlers because they couldn't force EntityFramework to crash on demand - this is quite a common problem in my experience. The user also knew that one approach was to push all that EntityFramework stuff out to another class and could then test their exception handling via mocks but didn't have the time/appetite to go down that path and thus wanted to exclude that code.<br />
<br />
I imagined that the user has code that looked something like this:<br />
<br />
<pre class="brush:csharp;">public void SaveCustomers(ILogger logger)
{
CustomersEntities ctx = CustomersEntities.Context;
try
{
// awesome stuff with EntityFramework
ctx.SaveChanges();
}
catch(Exception ex)
{
// do some awesome logging
logger.Write(ex);
throw;
}
}</pre>
<br />
and I could see why it would be hard (but not impossible) to test the exception handling. Now, instead of extracting out all the interactions with EntityFramework so that it is possible to throw an exception during testing, I suggested the following refactoring:<br />
<br />
<pre class="brush:csharp;">internal void CallWrapper(Action doSomething, ILogger logger)
{
try
{
doSomething();
}
catch(Exception ex)
{
// do some awesome logging
logger.Write(ex);
throw;
}
}</pre>
<br />
which I would then use like this:<br />
<br />
<pre class="brush:csharp;">public void SaveCustomers(ILogger logger)
{
CustomersEntities ctx = CustomersEntities.Context;
CallWrapper(() => {
// awesome stuff with EntityFramework
ctx.SaveChanges();
}, logger);
}</pre>
<br />
<br />
My original tests should still pass as before and I now have a new method that I can test independently.<br />
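<br />
For example (a sketch using NUnit and Moq; the CustomerRepository class that owns CallWrapper is hypothetical), a test of the exception path might look like this:<br />
<br />
<pre class="brush:csharp;">[Test]
public void CallWrapper_Logs_And_Rethrows()
{
    var logger = new Mock<ILogger>();
    var sut = new CustomerRepository(); // hypothetical owner of CallWrapper

    // the wrapper should log the exception and rethrow it untouched
    Assert.Throws<InvalidOperationException>(() =>
        sut.CallWrapper(() => { throw new InvalidOperationException(); }, logger.Object));

    logger.Verify(l => l.Write(It.IsAny<Exception>()), Times.Once());
}</pre>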
<br />
I know this isn't the only way to tackle this sort of problem and I'd love to hear about other approaches.

<h3>A simple TDD example (2014-10-06)</h3>
I recently posted a response on <a href="http://stackoverflow.com/a/26152423/189163">StackOverflow wrt TDD and Coverage</a> and I thought it would be worth re-posting it here. The example is simple but hopefully shows how writing the right tests using TDD gives you a better suite of tests for your code than you would probably write if you wrote the tests after the code (which may have been re-factored as you developed).<br />
<br />
"As the [original] accepted answer has pointed out your actual scenario reduces to collection.Sum() however you will not be able to get away with this every time.<br />
<br />
If we use TDD to develop this (overkill I agree but easy to explain) we would [possibly] do the following (I am also using <a href="http://www.nunit.org/">NUnit</a> in this example out of preference).<br />
<br />
<pre class="brush:csharp;">[Test]
public void Sum_Is_Zero_When_No_Entries()
{
var bomManager = new BomManager();
Assert.AreEqual(0, bomManager.MethodToTest(new Collection<int>()));
}
</pre>
<br />
and then write the following code (note: we write the minimum to meet the current set of tests)<br />
<br />
<pre class="brush:csharp;">public int MethodToTest(Collection<int> collection)
{
var sum = 0;
return sum;
}
</pre>
<br />
We would then write a new test e.g.<br />
<br />
<pre class="brush:csharp;">[Test]
[TestCase(new[] { 0 }, 0)]
public void Sum_Is_Calculated_Correctly_When_Entries_Supplied(int[] data, int expected)
{
var bomManager = new BomManager();
Assert.AreEqual(expected, bomManager.MethodToTest(new Collection<int>(data)));
}
</pre>
<br />
If we ran our tests now they would all pass (green), so we need new test cases<br />
<br />
<pre class="brush:csharp;">[TestCase(new[] { 1 }, 1)]
[TestCase(new[] { 1, 2, 3 }, 6)]</pre>
<br />
In order to satisfy those tests I would need to modify my code e.g.<br />
<br />
<pre class="brush:csharp;">public int MethodToTest(Collection<int> collection)
{
var sum = 0;
foreach (var value in collection)
{
sum += value;
}
return sum;
}</pre>
<br />
Now all my tests work and if I run that through <a href="http://www.nuget.org/packages/opencover">OpenCover</a> I get 100% sequence and branch coverage - Hurrah!... And I did so without using coverage as my control, but by writing the right tests to support my code.<br />
<br />
BUT there is a 'possible' defect... what if I pass in null? Time for a new test to investigate<br />
<br />
<pre class="brush:csharp;">[Test]
public void Sum_Is_Zero_When_Null_Collection()
{
var bomManager = new BomManager();
Assert.AreEqual(0, bomManager.MethodToTest(null));
}</pre>
<br />
The test fails so we need to update our code e.g.<br />
<br />
<pre class="brush:csharp;">public int MethodToTest(Collection<int> collection)
{
var sum = 0;
if (collection != null)
{
foreach (var value in collection)
{
sum += value;
}
}
return sum;
}</pre>
<br />
Now we have tests that support our code rather than tests that test our code i.e. our tests do not care about how we went about writing our code.<br />
<br />
Now we have a good set of tests so we can now safely refactor our code e.g.<br />
<br />
<pre class="brush:csharp;">public int MethodToTest(IEnumerable<int> collection)
{
return (collection ?? new int[0]).Sum();
}</pre>
<br />
And I did so without affecting any of the existing tests."<br />
<br />
<h3>Customising New Relic installation during Azure deployments (2014-04-03)</h3>
For about a year we've been running New Relic to monitor our WebRoles running on the Azure platform. Installation has been quite simple by following the instructions initially found on the <a href="https://docs.newrelic.com/docs/dotnet/">New Relic</a> site and now available via <a href="http://www.nuget.org/packages/NewRelicWindowsAzure">Nuget</a>; however two things about this process have been irking me.<br />
<br />
First, I wanted to be able to distinguish the CI and Production deployments in the New Relic portal by giving them different names, but the name as it appears in the New Relic portal is controlled through a setting in the <span style="font-family: Courier New, Courier, monospace;">web.config</span> and cannot be controlled through the Azure portal.<br />
<br />
Second, I wanted to be able to control the licence key we used for CI (free licence, limited functionality) and Production (expensive licence, full functionality) deployments; however the key is embedded in the <span style="font-family: Courier New, Courier, monospace;">newrelic.cmd</span> and is applied when the New Relic agent is installed, so it is not easy to change during/post deployment.<br />
<br />
The initial solution to both these problems involved producing two packages, one for the CI environment(s) and one for the Production environment. Instead of the normal Debug and Release build outputs, a 3rd target, Production, was used and the <span style="font-family: Courier New, Courier, monospace;">web.config</span> was modified during the build process using a <a href="http://msdn.microsoft.com/en-us/library/dd465318(v=vs.100).aspx">transform</a> that changed the name to what was wanted. The licence key issue was resolved by having two <span style="font-family: Courier New, Courier, monospace;">newrelic.cmd</span> items in the project and then packaging the required one with the appropriate build. This was not ideal but it worked in a fashion; however the ProdOps guys were keen on having control over the name and licence key used in production.<br />
<br />
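For illustration, such a transform (a sketch only - the application name shown is made up) would look something like this in a <span style="font-family: Courier New, Courier, monospace;">Web.Production.config</span>:<br />
<br />
<pre class="brush:xml;"><configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- rename the application for the Production package -->
    <add key="NewRelic.AppName" value="MyApplication (Production)"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration></pre>
<br />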
<h4>
Changing the Application name</h4>
<div>
New Relic gets the Application name from a setting in the <span style="font-family: Courier New, Courier, monospace;">web.config</span> and so what is necessary is to read a setting in the Azure configuration and update the <span style="font-family: Courier New, Courier, monospace;">web.config</span>. There are many ways to resolve this issue but the approach we took was based on the solution to an <a href="https://github.com/newrelic/nuget-azure-cloud-services/issues/10">identical issue</a> raised on GitHub. </div>
<br />
For completeness I will reiterate the steps below:<br />
<br />
<ol>
<li>In the <span style="font-family: Courier New, Courier, monospace;">ServiceDefinition.csdef</span> file add a setting to the <ConfigurationSettings/> section</li>
<br />
<pre class="brush:xml;"><ConfigurationSettings>
<Setting name="NewRelicApplicationName" />
</ConfigurationSettings>
</pre>
<br />
<li>In the ServiceConfiguration file for your environment add a setting that will be used to set the Application name in New Relic</li>
<br />
<pre class="brush:xml;"><ConfigurationSettings>
<Setting name="NewRelicApplicationName" value="MyApplication" />
</ConfigurationSettings>
</pre>
<br />
<li>In the <span style="font-family: Courier New, Courier, monospace;">WebRole.cs</span> file for your application amend your code with the following</li>
<br />
<pre class="brush:csharp;"> public class WebRole : RoleEntryPoint
{
public override bool OnStart()
{
ConfigureNewRelic();
return base.OnStart();
}
private static void ConfigureNewRelic()
{
if (RoleEnvironment.IsAvailable && !RoleEnvironment.IsEmulated)
{
string appName;
try
{
appName = RoleEnvironment.GetConfigurationSettingValue("NewRelicApplicationName");
}
catch (RoleEnvironmentException)
{
/*nothing we can do so just return*/
return;
}
if (string.IsNullOrWhiteSpace(appName))
return;
using (var server = new ServerManager())
{
// get the site's web configuration
const string siteNameFromServiceModel = "Web";
var siteName = string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);
var siteConfig = server.Sites[siteName].GetWebConfiguration();
// get the appSettings section
var appSettings = siteConfig.GetSection("appSettings").GetCollection();
AddConfigElement(appSettings, "NewRelic.AppName", appName);
server.CommitChanges();
}
}
}
private static void AddConfigElement(ConfigurationElementCollection appSettings, string key, string value)
{
if (appSettings.Any(t => t.GetAttributeValue("key").ToString() == key))
{
appSettings.Remove(appSettings.First(t => t.GetAttributeValue("key").ToString() == key));
}
ConfigurationElement addElement = appSettings.CreateElement("add");
addElement["key"] = key;
addElement["value"] = value;
appSettings.Add(addElement);
}
}
</pre>
</ol>
And that should be it<br />
<br />
<h4>
Changing the New Relic licence key</h4>
<div>
The New Relic licence key is applied when the New Relic agent is installed on the host, so what is needed is to read the Azure configuration when the <span style="font-family: Courier New, Courier, monospace;">newrelic.cmd</span> is executed as part of the Startup tasks (defined in the <span style="font-family: Courier New, Courier, monospace;">ServiceDefinition.csdef</span>) and apply it when the agent is installed. There does not appear to be a way of changing the licence key if your agents have already been installed, other than reducing the number of instances to 0 and then scaling back up (I suggest you use the staging slot for this).<br />
<br />
<ol>
<li>In the <span style="font-family: Courier New, Courier, monospace;">ServiceDefinition.csdef</span> file add a setting to the <ConfigurationSettings/> section</li>
<br /><pre class="brush:xml;"><ConfigurationSettings>
<Setting name="NewRelicLicenceKey" />
</ConfigurationSettings>
</pre>
<br />
and add a new Environment variable to the <span style="font-family: Courier New, Courier, monospace;">newrelic.cmd</span> startup task that will be set by the new configuration setting
<br />
<br /><pre class="brush:xml;"><Task commandLine="newrelic.cmd" executionContext="elevated" taskType="simple">
<Environment>
<Variable name="EMULATED">
<RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
</Variable>
<Variable name="NewRelicLicence">
<!-- http://msdn.microsoft.com/en-us/library/windowsazure/hh404006.aspx -->
<RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='NewRelicLicenceKey']/@value" />
</Variable>
<Variable name="IsWorkerRole" value="false" />
</Environment>
</Task>
</pre>
<br />
<li>In the ServiceConfiguration file for your environment add a setting that will be used to set the licence key in New Relic</li>
<br /><pre class="brush:xml;"><ConfigurationSettings>
<Setting name="NewRelicLicenceKey" value="<ADD YOUR KEY HERE>" />
</ConfigurationSettings></pre>
<br />
<li>Edit your <span style="font-family: Courier New, Courier, monospace;">newrelic.cmd</span> to use the Environment variable</li>
<br /><pre class="brush:plain;">:: Update with your license key
SET LICENSE_KEY=%NewRelicLicenceKey%</pre>
</ol>
</div>
<div>
<br />
Now you should be able to control the New Relic licence key during your deployment.</div>
<div>
<br /></div>
<h3>Book Review - Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API (2013-12-14)</h3>
I've written a book review on 'Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API' and posted it up on <a href="http://www.codeproject.com/Articles/696464/Building-Mobile-Applications-Using-Kendo-UI-Mobile">CodeProject</a>.
<b>Summary</b>
I liked this book and I took a lot from it that I am now using to build a sample application using <a href="http://www.kendoui.com/">KendoUI</a>. If you want to learn about ASP.NET Web API then this book isn't for you; you'll learn a lot more from the <a href="http://www.asp.net/web-api">ASP.NET Web API</a> site.

<h3>Application Tracing (2013-09-15)</h3>
So <a href="https://github.com/sawilde/opencover">OpenCover</a> is as feature complete as I care to take it at the moment (I may do the one remaining feature involving <a href="https://github.com/sawilde/opencover/issues/144">Windows Store applications</a> should I have a need for it), and I have decided not to continue with <a href="http://scubamunki.blogspot.com.au/2012/02/mutation-testing-use-for-re-jit.html">OpenMutate</a> as I can't really find a need for it other than an exploratory investigation into reJIT.<br />
<br />
I do have one more itch to scratch when it comes to profilers and that is application tracing; this may allow me to play with other technologies, which I'll list later. This itch started a while back, perhaps 6+ months ago, when I was trying to integrate some commercial tracing applications into an application I was working on and they both died horribly; I started to look for alternatives and found nothing available. Now I could have started the project then, but I decided to pester both vendors until they fixed the problem, which they eventually did (within a week or two of each other, or so it seemed to me), and I integrated one of the solutions and moved on... but the itch never went away.<br />
<br />
So what am I thinking... well, a profiler (obviously) with 32/64-bit support (again, a given), making obscene use of the <a href="http://msdn.microsoft.com/en-us/library/ms231874.aspx">COR_PRF_MONITOR_ENTERLEAVE</a> functionality. The problem here is that I don't really know what people will want to track (hey, that is why there are companies that do this sort of thing, with BAs and suchlike to decide on it), so in the first instance I'll go at tracing everything (which will probably be very slow) and go from there.<br />
<br />
This leads to the next problem: data, lots of it, lots and lots of it, and that data is going to need a home, but a home I can then use to create reports at some point. For this I am thinking asynchronous: initially a queue, and potentially an event-source-like data store such as <a href="http://geteventstore.com/">EventStore</a> or <a href="https://github.com/NEventStore/NEventStore">NEventStore</a>. The advantage of an event source is that it would allow the views to be regenerated once we know what they are; perhaps something along the lines of <a href="http://www.splunk.com/">Splunk</a> or <a href="https://www.splunkstorm.com/">SplunkStorm</a> would come into play.<br />
<br />
So a name... always the hardest part but thankfully we have the internet and online dictionaries so I've gone with <a href="https://github.com/sawilde/opendiscover">OpenDiscover</a>.<br />
<br />
<h3>Creating a simple NodeJS app with Mongo (2013-03-09)</h3>
Okay, I woke up this morning (6am) with a need to create a simple reporting dashboard to display the coverage results from OpenCover when it dog-foods its own tests. Now that OpenCover has no <b><i>reported</i></b> bugs, I decided to use my spare time to investigate other technologies for a while.<br />
<br />
What I needed was simple 'online' storage to capture results from the build system and the ability to extract that data into charts. Normally I'd probably knock up a simple rails app because it is easy to do, however I decided, probably due to the heat, to use the following:<br />
<br />
<ul>
<li><a href="http://nodejs.org/">node.js</a> - a technology I haven't used but have meant to for a while; I also like the JavaScript syntax better than ruby (it's a personal thing)</li>
<li><a href="http://www.mongodb.org/">mongodb</a> - a database I am familiar with</li>
<li><a href="https://developers.google.com/chart/">google charts</a> - free; as in beer.</li>
<li><a href="http://www.heroku.com/">heroku </a>- free; well my intended usage will be.</li>
</ul>
<div>
After a quick time-boxed search of the web about how to use node with mongodb and create a RESTful API, I settled on the following packages:</div>
<div>
<ul>
<li><a href="http://mongoosejs.com/">mongoose </a>- for interacting with the mongo database</li>
<li><a href="http://mcavage.github.com/node-restify/">restify </a>- for creating a simple rest server</li>
<li><a href="https://github.com/remy/nodemon">nodemon </a>- monitors changes in your app and restarts; sounds useful</li>
</ul>
</div>
<div>
I'll assume other packages will be added to the mix as challenges present themselves. It's now 7am and time for breakfast and then the fun starts... </div>
<div>
<br /></div>
<div>
And a few hours later we have a simple storage system hosted on Heroku; all we need now is the charts.</div>
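<div>
<br /></div>
<div>
As a rough sketch of the shape of it (illustrative only - the route, schema and environment variable names here are made up rather than lifted from the repository):</div>
<div>
<br /></div>
<pre class="brush: js">var restify = require('restify');
var mongoose = require('mongoose');

mongoose.connect(process.env.MONGO_URL || 'mongodb://localhost/metrics');

// a simple document for a coverage result
var Result = mongoose.model('Result', new mongoose.Schema({
  project: String,
  coverage: Number,
  createdAt: { type: Date, default: Date.now }
}));

var server = restify.createServer();
server.use(restify.bodyParser());

// POST /results stores a new coverage result
server.post('/results', function (req, res, next) {
  Result.create(req.body, function (err, doc) {
    if (err) return next(err);
    res.send(201, doc);
    return next();
  });
});

server.listen(process.env.PORT || 3000);</pre>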
<div>
<br /></div>
<div>
The repository can be found on <a href="https://github.com/sawilde/metrics-store">github</a>. </div>
<div>
<br /></div>
<div>
I am sure it will evolve over time but it was very simple to get to this stage by leveraging the work of all those who have gone before.</div>
<h3>MongoDB, Mongoid, MapReduce and Embedded Documents (2012-08-25)</h3>
I am using <a href="http://mongoid.org/en/mongoid/index.html">Mongoid</a> to store some data as documents in a <a href="http://www.mongodb.org/">MongoDB</a> database and then run some <a href="http://en.wikipedia.org/wiki/MapReduce">MapReduce</a> queries against the data. I have no trouble mapping data from normal documents or from a single embedded document, but I could not extract data from an embedded collection of documents, i.e.<br />
<br />
<pre class="brush: ruby">class Foo
include Mongoid::Document
#fields
field :custom_id, :type => String
#relations
embeds_many :bars
end</pre>
<pre class="brush: ruby">class Bar
include Mongoid::Document
#fields
field :custom_field, :type => String
#relations
embedded_in :Foo
end</pre>
<br />
First, it looks like we need to run the <b>map</b> part of the MapReduce against the parent document and not the child, i.e. <b>Foo.map_reduce(...)</b> will find documents but <b>Bar.map_reduce(...)</b> does not; however that is not surprising as it is also not possible to count all <b>Bar</b> documents by doing <b>Bar.all.count</b> in the rails console.<br />
<br />
Now a MapReduce query in MongoDB is done as a pair of JavaScript scripts: the first does the map by <i>emit</i>ting a mini-document of data and the second aggregates the data in some manner. So, thinking I had a collection (array), my first attempt to map data from the embedded documents was this:<br />
<br />
MAP:
<br />
<pre class="brush: js">function() {
if (this.bars == null) return;
for (var bar in this.bars){
emit(bar.custom_field, { count: 1 });
}
}
</pre>
<br />
REDUCE:
<br />
<pre class="brush: js">function(key, values) {
var total = 0;
for ( var i=0; i< values.length; i++ ) {
total += values[i].count;
}
return { count: total };
}</pre>
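<br />
For reference, the pair can be run directly in the mongo shell too (assuming the collection backing <b>Foo</b> is named <b>foos</b>; this is a sketch, not the rails console session):<br />
<br />
<pre class="brush: js">// map and reduce are the two functions above, assigned to variables
db.foos.mapReduce(map, reduce, { out: { inline: 1 } });</pre>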
<br />
This produced an unusual result: there was only a single aggregated document, with a null key, and the count was the total number of child documents (summed across all the parents).
<br />
<br />
Now I could have just broken the child document out and not embedded it but I didn't want to break the model over something so trivial that must, in my eyes, be possible.<br />
<br />
After much googling and reading of forum posts, I couldn't find any samples. I eventually noticed some 'unusual' syntax on an unrelated topic which led me to rewrite the <b>map</b> script into this:<br />
<br />
<pre class="brush: js">function() {
if (this.bars== null) return;
for (var bar in this.bars){
emit(this.bars[bar].custom_field, { count: 1 });
}
}
</pre>
<br />
Which produced the expected results. Okay, this was probably obvious to anyone who knows MongoDB+MapReduce well, but it took me a while to find out and it still isn't that intuitive; the reason it works is that JavaScript's for...in loop iterates over property names (here the array indices) rather than the values, so <b>bar</b> is an index and the embedded document has to be accessed as <b>this.bars[bar]</b>. I thought I'd write it up as a bit of a reference.

<h3>The "Pigs" and "Chickens" fable (2012-06-08)</h3>
I think anyone who has heard of Agile and Scrum has heard of the Pigs and Chickens story and how it describes those who are committed to the delivery of the project, "Pigs", and those who are just involved, "Chickens." If not, click on the image and learn more about it.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.implementingscrum.com/2006/09/11/the-classic-story-of-the-pig-and-chicken/"><img alt="Implementing Scrum - Pigs and Chickens" border="0" height="112" src="http://www.implementingscrum.com/images/060911-scrumtoon.jpg" title="Implementing Scrum - Pigs and Chickens" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
However I was just recently re-reading "Death March" by Edward Yourdon (1st Edition) and I came across this response to the parable, in the context of commitment whilst on a death march.<br />
<blockquote class="tr_bq">
“I’m not sure you will find any old pigs in development perhaps more chickens. I think that kind of commitment continues until (inevitably?) you get into the first death march project – then there is a rude awakening. Either the pig realises what’s happening, this is the slaughterhouse! RUN!! Or the pig is making bacon…” - Paul Mason (Death March).</blockquote>
I just found it quite amusing and thought I should share...<br />
<br />
<br />
<br />
<br />
<h3>Mutation Testing; a use for re-JIT? (2012-02-03)</h3>
<b>Where to start...</b><br />
Mutation testing is <a href="http://en.wikipedia.org/wiki/Mutation_testing">described</a> as modifying a program in small amounts and then executing the original 'passing' tests that exercise that code, watching them fail. It is a way of making sure your tests are actually testing what you believe they are testing.<br />
<br />
<b>Setting the stage...</b><br />
So how can we do this with .NET? Well, first we need to know what tests execute what code, and we can use <a href="https://github.com/sawilde/opencover">OpenCover</a> for that when it is using its tracking-by-test feature. With that feature we can see which tests execute which sequence points and also see what branches were exercised; it is this latter information we can take advantage of when creating a mutation testing utility.<br />
<br />
<b>New toys to play with...</b><br />
Now this mutation tester is going to be working at the IL level, and as such we could use the JIT (just-in-time) compilation feature that is used with <a href="https://github.com/sawilde/opencover">OpenCover</a> (and <a href="https://github.com/sawilde/partcover.net4">PartCover</a>). However, that would mean either complicated instrumentation where we would have to control which path we wanted to exercise, or simpler instrumentation that would require the process under test (e.g. nunit, mstest, ...) to be stopped and restarted each time to allow new code to be exercised. With .NET 4.5 (in preview at the time of writing) there is a re-JIT compilation feature that we could use instead, and this would allow us to use simple instrumentation without needing to stop and start the process under test. There are a number of <a href="http://blogs.msdn.com/b/davbr/archive/2011/10/10/rejit-limitations-in-net-4-5.aspx">limitations</a> of re-JIT but after reviewing them (several times) I don't think any are actually show stoppers.<br />
<br />
To make the re-JIT useful we need a way of executing a test or tests repeatedly without having to restart the application under test, and this isn't possible with nunit and mstest. However, it should be possible to use the test runners from <a href="http://github.com/continuoustests/AutoTest.Net">AutoTest.Net</a> if we host them directly or in a separate process that can be communicated with.<br />
<br />
<b>A plan...</b><br />
So the flow will be something like this (I wonder how well this will stand up to the test of time); I haven't looked at the latest profiler API in depth, but the documentation on <a href="http://msdn.microsoft.com/en-us/library/hh362351(v=vs.110).aspx">MSDN</a> and <a href="http://blogs.msdn.com/b/davbr/archive/2011/10/12/rejit-a-how-to-guide.aspx">David Broman's</a> blog seems to indicate this should be possible.<br />
<br />
<ul><li>Run OpenCover to produce an XML file with a list of what tests exercised what branches</li>
<li>For each branch point =></li>
<ul><li>if first branch of method then store the original IL (as we will need repeated access to this IL)</li>
<li>(re)instrument the method that contains that branch point and using the original IL of that method invert the logic of only that point</li>
<li>execute each test that exercises that branch point => record pass, fail</li>
<li>if last branch of method then revert method to original IL</li>
</ul></ul><div><b>All it needs is a name...</b><br />
All of this will be hosted on GitHub under <a href="https://github.com/sawilde/openmutate">OpenMutate</a>. Let the games begin....</div>

<h3>Unusual coverage in VB.NET (2012-01-21)</h3>
Recently a user posted on <a href="http://stackoverflow.com/questions/8926063/code-coverage-why-is-end-marker-red-end-if-end-try">StackOverflow</a> asking why he was seeing unusual coverage results in VB.NET with MSTEST and Visual Studio. The question already had answers that helped the questioner, but I decided to delve a little deeper and find out why the proposed solution worked.<br />
<br />
The issue was that in his code sample the <b>End Try</b> was not being shown as covered even though he had exercised the Try and the Catch parts of his code.<br />
<br />
First I broke his sample down into something simpler and I have highlighted the offending line.<br />
<br />
<pre class="brush: vb"> 07 Function Method() As String
08 Try
09 Return ""
10 Catch ex As Exception
11 Return ""
12 <b>End Try</b>
13 End Function
</pre><br />
In debug we can extract the following sequence points (I am, obviously, using <a href="https://github.com/sawilde/opencover">OpenCover</a> for this.)<br />
<br />
<pre class="brush: xml"><SequencePoints>
<SequencePoint offset="0" ordinal="0" uspid="261" vc="0" ec="32" el="7" sc="5" sl="7"/>
<SequencePoint offset="1" ordinal="1" uspid="262" vc="0" ec="12" el="8" sc="9" sl="8"/>
<SequencePoint offset="2" ordinal="2" uspid="263" vc="0" ec="22" el="9" sc="13" sl="9"/>
<SequencePoint offset="19" ordinal="3" uspid="264" vc="0" ec="30" el="10" sc="9" sl="10"/>
<SequencePoint offset="20" ordinal="4" uspid="265" vc="0" ec="22" el="11" sc="13" sl="11"/>
<SequencePoint offset="40" ordinal="5" uspid="266" vc="0" ec="16" el="12" sc="9" sl="12"/>
<SequencePoint offset="41" ordinal="6" uspid="267" vc="0" ec="17" el="13" sc="5" sl="13"/>
</SequencePoints>
</pre>(where sl = start line, el = end line, sc = start column, ec = end column and offset = IL offset in decimal)<br />
<br />
However these only make sense when you look at the IL...<br />
<br />
<pre>.method public static
string Method () cil managed
{
// Method begins at RVA 0x272c
// Code size 43 (0x2b)
.maxstack 2
.locals init (
[0] string Method,
[1] class [mscorlib]System.Exception ex
)
IL_0000: nop
IL_0001: nop
.try
{
IL_0002: ldstr ""
IL_0007: stloc.0
IL_0008: leave.s IL_0029
IL_000a: leave.s IL_0028
} // end .try
catch [mscorlib]System.Exception
{
IL_000c: dup
IL_000d: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception)
IL_0012: stloc.1
IL_0013: nop
IL_0014: ldstr ""
IL_0019: stloc.0
IL_001a: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError()
IL_001f: leave.s IL_0029
IL_0021: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError()
IL_0026: leave.s IL_0028
} // end handler
IL_0028: nop
IL_0029: ldloc.0
IL_002a: ret
} // end of method Module1::Method
</pre><br />
Now as you can see, the End Try line that is causing concern would only be marked as hit (assuming they are using similar instrumentation to OpenCover) if the code reached the IL instruction at offset 40 (IL_0028). However, when one looks at the IL produced it is not possible to see how you would ever reach that instruction (<a href="http://en.wikipedia.org/wiki/List_of_CIL_instructions"><b>leave.s</b></a> is a small jump-like instruction that is used to exit try/catch/finally blocks); if you follow the code you see that you will always reach a <b>leave.s</b> that jumps to IL_0029 first.<br />
<br />
In release the IL changes to something more like what I was expecting beforehand and it has no unusual extra IL...<br />
<br />
<pre >.method public static
string Method () cil managed
{
// Method begins at RVA 0x2274
// Code size 30 (0x1e)
.maxstack 2
.locals init (
[0] string Method,
[1] class [mscorlib]System.Exception ex
)
.try
{
IL_0000: ldstr ""
IL_0005: stloc.0
IL_0006: leave.s IL_001c
} // end .try
catch [mscorlib]System.Exception
{
IL_0008: dup
IL_0009: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception)
IL_000e: stloc.1
IL_000f: ldstr ""
IL_0014: stloc.0
IL_0015: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError()
IL_001a: leave.s IL_001c
} // end handler
IL_001c: ldloc.0
IL_001d: ret
} // end of method Module1::Method
</pre><br />
but so do the sequence points...<br />
<br />
<pre class="brush: xml"><SequencePoints>
<SequencePoint offset="0" ordinal="0" uspid="33" vc="0" ec="22" el="9" sc="13" sl="9"/>
<SequencePoint offset="15" ordinal="1" uspid="34" vc="0" ec="22" el="11" sc="13" sl="11"/>
<SequencePoint offset="28" ordinal="2" uspid="35" vc="0" ec="17" el="13" sc="5" sl="13"/>
</SequencePoints>
</pre><br />
So now the try/catch lines will never be marked as covered at all, which is not helpful either.<br />
<br />
So let's try changing the code as suggested and go back to debug (because that is usually where you will be running coverage from).<br />
<br />
<pre class="brush: vbnet">15 Function Method2() As String
16 Dim x As String
17 Try
18 x = ""
19 Catch ex As Exception
20 x = ""
21 End Try
22 Return x
23 End Function
</pre><br />
Again we look at the sequence points...<br />
<br />
<pre class="brush: xml"><SequencePoints>
<SequencePoint offset="0" ordinal="0" uspid="268" vc="0" ec="33" el="15" sc="5" sl="15"/>
<SequencePoint offset="1" ordinal="1" uspid="269" vc="0" ec="12" el="17" sc="9" sl="17"/>
<SequencePoint offset="2" ordinal="2" uspid="270" vc="0" ec="19" el="18" sc="13" sl="18"/>
<SequencePoint offset="17" ordinal="3" uspid="271" vc="0" ec="30" el="19" sc="9" sl="19"/>
<SequencePoint offset="18" ordinal="4" uspid="272" vc="0" ec="19" el="20" sc="13" sl="20"/>
<SequencePoint offset="31" ordinal="5" uspid="273" vc="0" ec="16" el="21" sc="9" sl="21"/>
<SequencePoint offset="32" ordinal="6" uspid="274" vc="0" ec="17" el="22" sc="9" sl="22"/>
<SequencePoint offset="36" ordinal="7" uspid="275" vc="0" ec="17" el="23" sc="5" sl="23"/>
</SequencePoints>
</pre><br />
and the IL...<br />
<br />
<pre >.method public static
string Method2 () cil managed
{
// Method begins at RVA 0x282c
// Code size 38 (0x26)
.maxstack 2
.locals init (
[0] string Method2,
[1] string x,
[2] class [mscorlib]System.Exception ex
)
IL_0000: nop
IL_0001: nop
.try
{
IL_0002: ldstr ""
IL_0007: stloc.1
IL_0008: leave.s IL_001f
} // end .try
catch [mscorlib]System.Exception
{
IL_000a: dup
IL_000b: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception)
IL_0010: stloc.2
IL_0011: nop
IL_0012: ldstr ""
IL_0017: stloc.1
IL_0018: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError()
IL_001d: leave.s IL_001f
} // end handler
IL_001f: nop
IL_0020: ldloc.1
IL_0021: stloc.0
IL_0022: br.s IL_0024
IL_0024: ldloc.0
IL_0025: ret
} // end of method Module1::Method2
</pre><br />
So for the <b>End Try</b> to be covered we need line 21 to be hit, and that is offset 31 (IL_001F); as can be seen, both <b>leave.s</b> instructions jump to that point, so now that line will be marked as covered.

<h3>Adding OpenCover to TeamCity (2011-10-02)</h3>
Adding OpenCover to the latest version of <a href="http://www.jetbrains.com/teamcity/">TeamCity</a> (6.5) couldn't be easier; however, if you need help, follow these simple steps.<br />
<br />
1) <a href="https://github.com/sawilde/opencover/downloads">Download</a> and install OpenCover<br />
2) <a href="http://reportgenerator.codeplex.com/">Download</a> and install ReportGenerator (actually unzip)<br />
3) Register the OpenCover profiler DLLs using the regsvr32 utility<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><br />
regsvr32 /s x86\OpenCover.Profiler.dll</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">regsvr32 /s x64\OpenCover.Profiler.dll</span><br />
<br />
<br />
4) Using TeamCity add a new Build Step to your configuration<br />
5) Choose <b>Command Line</b> as the runner type then choose <b>Custom Script</b> for the Run option.<br />
6) Now all that is needed is to set up the command to run the profiler against your tests e.g. for OpenCover the working directory is set to <b>main\bin\debug</b> and so we have<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><br />
"%env.ProgramFiles(x86)%\opencover\opencover.console.exe" "-target:..\..\..\tools\NUnit-2.5.10.11092\bin\net-2.0\nunit-console-x86.exe" -targetargs:"OpenCover.Test.dll /noshadow" -filter:"+[Open*]* -[OpenCover.T*]*" "-output:..\..\..\opencovertests.xml"</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">"%env.ProgramFiles(x86)%\ReportGenerator\bin\ReportGenerator.exe" ..\..\..\opencovertests.xml ..\..\..\coverage</span><br />
<br />
<div><br />
7) Finally, set up the artifacts so that you can view the results in TeamCity e.g.</div><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><br />
%teamcity.build.workingDir%\opencovertests.xml</span><br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;">%teamcity.build.workingDir%\coverage\**\*.*</span><br />
<br />
<span class="Apple-style-span" style="font-family: 'Courier New', Courier, monospace;"><br />
</span>And there you have it, OpenCover running under TeamCity and visual reports provided by ReportGenerator. I am sure you will find ways to improve upon this for your own builds.

<h3>The problem with sequence coverage. (part 2) (2011-08-28)</h3>
Previously I mentioned why relying on sequence coverage alone is not a good idea, as it is possible to have 100% sequence coverage but not 100% code coverage. However, I only described a scenario that used a branch with 2 paths, i.e. the most common form of conditional branch; there is one other member of the conditional branch family in IL and that is the switch instruction, which can have many paths. This time I am using the code from the <a href="http://json.codeplex.com/">Newtonsoft.Json</a> library because a) it has tests and b) it is very well covered at 83% sequence coverage, but only 72% (by my calculations) branch coverage. The subject of this investigation is BsonReader::ReadType(BsonType); this method has a very large switch statement, which is actually compiled to a switch instruction in IL, with a default and several <a href="http://en.wikipedia.org/wiki/Switch_statement">fall-throughs</a>; a fall-through is where two or more case statements call the same code. The method itself has 98% sequence coverage and 82% branch coverage; the only code that is uncovered is the handler for the <b>default:</b> path.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-QsdFobuhcg4/Tlj0HYunyCI/AAAAAAAAADs/-tK1H64RvlU/s1600/seq_110827.png" imageanchor="1"><img border="0" src="http://3.bp.blogspot.com/-QsdFobuhcg4/Tlj0HYunyCI/AAAAAAAAADs/-tK1H64RvlU/s1600/seq_110827.png" /></a></div><div class="" style="clear: both; text-align: left;">which is not unexpected as it is a handler for an Enum which should not be set to any value that is not part of the allowed values. Looking at the branch coverage report we have the following results (the switch instruction we are interested in is at IL offset 8.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-mYi7tyVADKU/Tlj3GLYs0cI/AAAAAAAAADw/esP92J61Ajs/s1600/cover_110827.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="323" src="http://4.bp.blogspot.com/-mYi7tyVADKU/Tlj3GLYs0cI/AAAAAAAAADw/esP92J61Ajs/s400/cover_110827.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">Now the first path (0) is unvisited, but we knew that, so the next unvisited branch is #14 and the next is #17; luckily for us the enum in question that is used by the switch instruction is well defined.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-k5aJaYKH894/Tlj5ljvo6kI/AAAAAAAAAD0/tafF2OJRdJM/s1600/json_type_110827.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://3.bp.blogspot.com/-k5aJaYKH894/Tlj5ljvo6kI/AAAAAAAAAD0/tafF2OJRdJM/s400/json_type_110827.png" width="225" /></a></div><div class="separator" style="clear: both; text-align: left;">And as such we can thus deduce that the method is never called during testing with the values Symbol and TimeStamp but the code that they would call is covered; in fact we can see from the code that both these enum values are part of the switch/case statement and are part of fall-throughs. So again we see how branch coverage helps identify 'potential' issues and test candidates.</div><br />
<h3>The problem with sequence coverage. (2011-08-26)</h3>
Sequence coverage is probably the simplest coverage metric; the information is packaged in PDB files and can be read using tools like <a href="http://www.mono-project.com/Cecil">Mono.Cecil</a>, but just because a method has 100% sequence coverage does not mean you have 100% code coverage.<br />
<div><br />
</div><div>I'll use an example from OpenCover's own dogfood tests to demonstrate what I mean. Here is a method which shows that it has 100% coverage (sequence point that is).</div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-POwpDbQYTCI/TlZWPdymQOI/AAAAAAAAADk/m1gOKnDlCyI/s1600/seq_110825.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="133" src="http://3.bp.blogspot.com/-POwpDbQYTCI/TlZWPdymQOI/AAAAAAAAADk/m1gOKnDlCyI/s640/seq_110825.png" width="500" /></a></div><br />
However, I see an issue: on line 101 there is a condition, i.e. a branch, and yet if the visit count is 1 then there is no possibility that both paths of that branch could have been tested. We can therefore infer that even with 10000 visits there would be no guarantee that every path was covered, even in such a simple method.<br />
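<div><br /></div><div>To make the idea concrete, here is a contrived example (not the method from the screenshot above): a single test call visits every sequence point, yet only one path of the branch is exercised.</div><div><br /></div>
<pre class="brush:csharp;">public static int GetValueOrDefault(IDictionary<string, int> lookup, string key)
{
    // One call with a key that exists visits this sequence point (100% sequence
    // coverage) but never takes the false path of the condition (50% branch coverage).
    return lookup.ContainsKey(key) ? lookup[key] : 0;
}</pre>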
<div><br />
</div><div>Looking at the OpenCover results from which the coverage report was generated, we get<br />
<br />
</div><div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Iwj3DleBX3M/TlZWO-Ax0oI/AAAAAAAAADg/ImVGZJcgGY4/s1600/cover_110825.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="232" src="http://3.bp.blogspot.com/-Iwj3DleBX3M/TlZWO-Ax0oI/AAAAAAAAADg/ImVGZJcgGY4/s640/cover_110825.png" width="500" /></a></div><br />
<div>We can see that each sequence point has been visited once; however, the branch coverage shows that only one of the paths for the condition we identified has been visited (in this case the true path), which is good as that is what we deduced.</div><br />
<div>So if you are using code coverage tools, do NOT rely on sequence point coverage alone to determine how well covered your code is. Luckily <a href="https://github.com/sawilde/opencover">OpenCover</a>, as of 25th Aug 2011, supports branch coverage, and <a href="http://reportgenerator.codeplex.com/">ReportGenerator 1.2</a> displays most of the information needed to help you identify possible coverage mismatches.</div><br />
</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5788222338675695087.post-51297654078951228892011-08-10T20:01:00.003+10:002011-08-10T21:45:06.293+10:00OpenCover Performance Impact (part 2)I think I now have a handle on why I was getting the results I reported earlier, i.e. that <a href="https://github.com/sawilde/opencover">OpenCover</a> and <a href="https://github.com/sawilde/partcover.net4">PartCover</a> are not some magical performance boosters that add Go Faster stripes to your code.<div>
<br /></div><div>After a heads up from <a href="http://twitter.com/#!/leppie">leppie</a> and his investigations of using OpenCover on his <a href="https://github.com/leppie/IronScheme">IronScheme</a> project, I realised that I needed to spend some time optimizing how I get data from the profiler and aggregate it into the report. In case you are wondering, an IronScheme test that took just shy of 1 minute to run on my machine took over 60 minutes when running under the profiler. Ouch!</div><div>
<br /></div><div><b>The problem</b></div><div>
<br /></div><div>First of all I should explain what sort of data OpenCover gathers (and why), and then I can describe what I did to improve performance. OpenCover records each visit to a sequence point and stores these visits in shared memory; I did it this way as I am hoping to be able to use the order of visits for some form of path coverage analysis at a later date. After 8000 visits it informs the host process that there is a block ready for processing. The host takes this block, makes a copy, releases the shared memory back to the profiler and then processes the data. After processing the data the host then waits for the next message. It was this latter stage that was the bottleneck: the host was spending so much time aggregating the data that the profiler was already waiting with the next 8000 points by the time it finished.</div>
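<div><br /></div><div>From the host's side the handoff looks roughly like the sketch below (illustrative only; the names and synchronisation details are not the actual OpenCover code). The problem is the final Aggregate call: while it runs, the next block is already waiting.</div><pre class="brush:csharp;">using System;
using System.Runtime.InteropServices;
using System.Threading;

// Illustrative sketch only - not the actual OpenCover code.
class BlockPump
{
    private readonly EventWaitHandle _blockReady;    // signalled by the profiler
    private readonly EventWaitHandle _blockReleased; // signalled by the host
    private readonly IntPtr _sharedBlock;            // mapped view of the shared memory
    private readonly int _blockSize;

    public BlockPump(EventWaitHandle ready, EventWaitHandle released, IntPtr block, int size)
    {
        _blockReady = ready;
        _blockReleased = released;
        _sharedBlock = block;
        _blockSize = size;
    }

    public void Pump()
    {
        while (true)
        {
            _blockReady.WaitOne();                           // wait for a full block of visits
            var copy = new byte[_blockSize];
            Marshal.Copy(_sharedBlock, copy, 0, _blockSize); // copy out of shared memory
            _blockReleased.Set();                            // profiler can start refilling
            Aggregate(copy); // the bottleneck: aggregating inline delays the next WaitOne
        }
    }

    private void Aggregate(byte[] data) { /* update the coverage model */ }
}</pre><div>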
<br /></div><div><b>An (interim) solution</b></div><div><b>
<br /></b></div><div>I say interim solution as I am not finished with the performance improvements yet, but I decided that what I had implemented so far was okay for release.</div><div>
<br /></div><div>First I looked at how the results were being aggregated and noticed that a lot of the time was being spent looking up each sequence point so that its visit count could be updated. I switched this to a list and mapped the visit count data onto the model at the end of the profiling run. This helped, but only by bringing the profiling run down to ~40 minutes.</div>
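<div><br /></div><div>The change was of this shape (hypothetical types and names; not the actual OpenCover source): index straight into a flat list of counters by the sequence point's unique id, and only touch the model once at the end.</div><pre class="brush:csharp;">using System.Collections.Generic;

// Hypothetical model type, for illustration only
class SequencePoint { public int UniqueId; public int VisitCount; }

class VisitCounts
{
    // Before (roughly): a per-visit lookup into the model, e.g.
    //   _model.GetSequencePoint(id).VisitCount++;
    // After: O(1) indexing into a flat list of counters.
    private readonly List&lt;int&gt; _counters = new List&lt;int&gt;();

    public void RecordVisit(int uniquePointId)
    {
        while (_counters.Count &lt;= uniquePointId)
            _counters.Add(0);
        _counters[uniquePointId]++;
    }

    // Map the counters back onto the model once, at the end of the run
    public void MapToModel(IEnumerable&lt;SequencePoint&gt; points)
    {
        foreach (var point in points)
            point.VisitCount = point.UniqueId &lt; _counters.Count
                ? _counters[point.UniqueId]
                : 0;
    }
}</pre><div>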
<br /></div><div>I then realised that I had to get the data out of the way quickly and process it later, so I added a processing thread and a ConcurrentQueue. This was an interesting turn of events: the target process now finished in 4 minutes, but the host took nearly 40 minutes to process the data, memory usage went up to 2.5GB and there was a backlog of 40K messages. Hmmm....</div>
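<div><br /></div><div>The shape of that change is sketched below (not the actual OpenCover source): the pump thread now only copies and enqueues, while a dedicated thread drains the queue. As the numbers above show, though, decoupling alone just moved the problem; the queue itself became the 40K-message backlog.</div><pre class="brush:csharp;">using System.Collections.Concurrent;
using System.Threading;

class QueuedAggregator
{
    private readonly ConcurrentQueue&lt;byte[]&gt; _queue = new ConcurrentQueue&lt;byte[]&gt;();

    public void Start()
    {
        var worker = new Thread(ProcessQueue) { IsBackground = true };
        worker.Start();
    }

    // Called from the pump loop: do the minimum and get back to the shared memory
    public void Enqueue(byte[] block)
    {
        _queue.Enqueue(block);
    }

    private void ProcessQueue()
    {
        while (true)
        {
            byte[] block;
            if (_queue.TryDequeue(out block))
                Aggregate(block); // if this is slow the queue (and memory) grows unbounded
            else
                Thread.Sleep(1);  // nothing waiting; don't spin
        }
    }

    private void Aggregate(byte[] block) { /* update the coverage model */ }
}</pre><div>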
<br /></div><div>After some toying, whilst looking for inspiration, I noticed that the marshaling of the structure (2 integers) was where most of the time was being spent. I switched this to using BitConverter, which also meant that I could avoid the memory pinning required by the marshaling. Now the target process still ran in just under 4 minutes, but the backlog very rarely reached 20 messages, memory usage stayed at a comfortable level (under 100MB) and the results were ready virtually as soon as the target process had closed. I also upped the number of visit points per packet to 16000, but this didn't show any noticeable improvement in performance.</div>
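<div><br /></div><div>The before/after of that change is roughly as follows (the structure name is made up for illustration): reading the two integers straight out of the byte array avoids both the Marshal call and the pinning.</div><pre class="brush:csharp;">using System;
using System.Runtime.InteropServices;

struct VisitPoint { public int UniqueId; public int VisitType; } // illustrative layout

static class PacketReader
{
    // Before: marshal each 8-byte entry into a struct (requires pinning)
    public static VisitPoint ReadSlow(byte[] buffer, int offset)
    {
        var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            var ptr = new IntPtr(handle.AddrOfPinnedObject().ToInt64() + offset);
            return (VisitPoint)Marshal.PtrToStructure(ptr, typeof(VisitPoint));
        }
        finally
        {
            handle.Free();
        }
    }

    // After: two BitConverter calls - no pinning, no marshaling
    public static VisitPoint ReadFast(byte[] buffer, int offset)
    {
        return new VisitPoint
        {
            UniqueId = BitConverter.ToInt32(buffer, offset),
            VisitType = BitConverter.ToInt32(buffer, offset + 4)
        };
    }
}</pre><div>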
<br /></div><div>I decided this was enough for now and released a version of the profiler.</div><div>
<br /></div><div><b>But what about the earlier results?</b> </div><div>
<br /></div><div>Those earlier results, though, still gave cause for thought. Why should the OpenCover dogfood tests be faster but the IronScheme test be so much slower? Well, the IronScheme tests were doing a lot of loops, running parts of the code many thousands of times, whereas the dogfood tests were unit tests where the code was only run a few times before moving on to the next test fixture and the next section of code. I am now thinking that the issue is down to the optimization that is normally performed by the JIT compiler but is turned off by the profiler: when running the tests without the profiler, the JIT compiler spends time optimizing the code, yet that time is not recovered because the code is not run enough times to produce a net gain; under the profiler, the JIT compiler simply compiles the non-optimised, instrumented code as-is.</div><div>
<br /></div><div>So, in conclusion, you may see some speed improvements if you are running tests where your code is only visited a few times, but if you are doing intensive execution of code then don't be surprised if performance is degraded.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5788222338675695087.post-66447754877542587052011-07-24T19:59:00.003+10:002011-07-24T21:52:52.886+10:00OpenCover Performance ImpactSo how does OpenCover's profiling impact your testing? The best way to answer that is to get some figures so that you can judge for yourself. <div><br /></div><div>I decided to use OpenCover's own tests and the timing value produced by NUnit itself; just like I'd expect any user trying to determine the impact to do, I suppose. I've also added the results from PartCover for comparison. Before I took any numbers I warmed the code by running it several times beforehand. Times below are in seconds.<br /><br /><table><tbody><tr><td></td><td>NUnit32</td><td>NUnit32 (OpenCover)</td><td>NUnit32 (PartCover)</td><td>NUnit64</td><td>NUnit64 (OpenCover)</td></tr><tr><td></td><td>2.643</td><td>2.691</td><td>2.639</td><td>4.544</td><td>3.807</td></tr><tr><td></td><td>2.629</td><td>2.69</td><td>2.611</td><td>4.426</td><td>3.753</td></tr><tr><td></td><td>2.642</td><td>2.638</td><td>2.612</td><td>4.46</td><td>4.036</td></tr><tr><td>Average</td><td>2.638</td><td>2.673</td><td>2.621</td><td>4.477</td><td>3.865</td></tr></tbody></table><br />I don't know quite how to interpret these results as they don't make much sense: OpenCover seemed to add on average 1.3% to the total time (which I'd expect), whereas PartCover appears to make the code go faster by 0.64%. I can't explain why the results for 64 bit seem to show that OpenCover improves performance by 13.6%. </div><div><br /></div><div>I tried to come up with a number of reasons for the above, but the results I keep getting are reasonably consistent, so I decided to post them anyway; perhaps someone else will be able to tell me what is happening. </div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5788222338675695087.post-60726496102519444062011-07-09T13:15:00.002+10:002011-07-09T14:00:34.995+10:00Questions about open source and liability in the workplace<div>Last weekend I attended DDDSydney and one of the most interesting sessions was a panel session about Microsoft and open source (Open Source & Microsoft Ecosystem); though, as these things go, it went quickly off(ish) topic, as expected by the panelists, whom I'll refer to as the crazy drupal girl and the 3 stooges (honestly, no offence folks, it was highly entertaining).</div><div><br /></div><div>However it got me thinking about the number of projects where I have come across an unusual bit of open source software that has some use (but has not found a niche or has since been surpassed), only to find that it was introduced by a developer because it was their pet open source project. Now the first question is "what is the liability under this scenario?"</div><div><br /></div><div>Did the developer ask first, as they should before using any open source software on a project? If so then the company accepted the situation; but what happens if they did not ask (or the company was not made aware)? Is the company still liable, or is the developer? 
I assume it would be the company, as they should have some sort of oversight, but for small, overworked teams where process may not be as strong this may get overlooked.</div><div><br /></div><div>The other issue is what happens if you introduce your pet open source project and then you leave: who supports it? How do you separate the open source project's needs from the day-job when they are so intermingled? Does the remaining team support it, and do they have the skills? And if the parting was acrimonious in nature and the team then raised a legitimate issue, would you fix it, or leave them to stew?</div><div><br /></div><div>I don't have answers to the above (I did title this "Questions about..."); the one answer that can be applied universally to most of them, I suppose, is "it depends". Each situation will be different, I suspect, but I think these types of questions should be asked by any company hoping to use open source software and by developers wishing to introduce it, whether they are contributors or not.</div><div><br /></div><div>Personally I have decided NOT to introduce the open source software I develop into my workplace; yes, they could use it and find it useful, but they can also afford commercial alternatives. If someone else suggested it, I'd have to make sure there was an agreement, should an issue arise that affects them, that if they want it fixed quickly then I may have to use 'work' time, i.e. no guarantees that it would be done that evening or even that week; after all, it is supposed to be fun and not stressful.</div><div><br /></div>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-5788222338675695087.post-70278345733456986732011-06-25T18:45:00.000+10:002011-06-25T19:51:13.568+10:00How do we get Users out of [open source] Welfare?Okay, an odd title but something I've been thinking about for some time, and I suppose it is the source of much of the frustration I have had whilst maintaining <a href="https://github.com/sawilde/partcover.net4">PartCover</a>; I am hoping to reverse the situation with <a href="https://github.com/sawilde/opencover">OpenCover</a>. <div><br /></div><div><b>Categorizing open source users</b></div><div><br /></div><div>First I'd like to explain that I roughly categorize people involved in open source thus:<div><br /></div><div><div><i>Contributors </i>- these are the guys and gals at the pit-face, developing software, writing documentation and generally striving to make an open source product better.</div></div><div><br /></div><div><div><i>Investors </i>- these individuals use open source software and help to make the product better via feedback and by raising, and following up, issues (probably as it is in their interest to do so).</div></div><div><br /></div><div><i>Benefactors </i>- usually companies that give tools to open source developers or sponsor a project in other ways, i.e. free licenses or free hosting, e.g. NDepend, JetBrains and GitHub.</div><div><br /></div><div><div><i>Angels </i>- these people provide invaluable advice in just managing an open source project; they may not be actively involved in the development itself but they keep you sane.</div></div><div><br /></div><div><i>Community </i>- our main user base: users of open source who don't actively contribute back, which is why I sometimes refer to them as <i>Welfare</i>. Maybe in the case of this group it is just a failure to engage; the product just works and they have no need to be involved beyond viewing forums and StackOverflow. 
But I feel that without the involvement of this group a lot of open source software, no matter how good, can fall by the wayside. </div><div><br /></div><div>But how do we get them involved? Well, first we have to find them. In my case, with PartCover, as the project had been abandoned the users had stopped raising issues on the SourceForge forums and tended to ask questions in other outlets such as <a href="http://stackoverflow.com/">StackOverflow</a> and the SharpDevelop or Gallio forums and mailing lists. </div><div><br /></div><div><b>Finding the users</b></div><div><br /></div><div>I scoured the internet and compiled a list of popular places where PartCover was <a href="https://github.com/sawilde/partcover.net4/wiki/Usages-of-PartCover">mentioned or supported</a>. I was surprised to find that PartCover was used or supported by <a href="http://sharpdevelop.net/opensource/sd/">SharpDevelop</a>, <a href="http://www.jetbrains.com/teamcity/">TeamCity</a> and <a href="http://www.typemock.com/">TypeMock</a> amongst others (and yet again I am surprised it was abandoned and not adopted by anyone sooner).</div><div><br /></div><div><a href="http://stackoverflow.com/">StackOverflow </a>seems to be the main place where people ask questions, and to keep track of them I have subscribed to an RSS feed for the partcover tag; as soon as an opencover tag becomes available, or I get enough rep to create it, I'll subscribe to that too.</div><div><br /></div><div>Twitter is also quite a common medium nowadays, so I have set up the following search filter, "opencover OR partcover -rt -via", to see if anyone mentions either of the projects.</div><div><br /></div><div><b>Engaging the users</b></div><div><br /></div><div>Now that I have found the users, or the majority of them, I have started notifying these lists, forums and projects that PartCover is alive again (and I have started to do the same to inform them about OpenCover). Hopefully this will bring them back, or at least let them know that if they have really big issues there is somewhere to go.</div><div><br /></div><div><b>Involving the <i>Community </i>users</b></div><div><br /></div><div>This is the big ask and I don't have an answer. If the product works then they don't need to visit the forums or declare their appreciation of a job well done. I think sites like <a href="https://www.ohloh.net/">ohloh</a> are trying to redress the balance. Some OS projects have a donate button, but I am not sure we are doing open source for money; though some projects do eventually go commercial, anyone else can pick up the original code and develop it. Maybe the users don't know how to get involved; in the case of my OS projects they are quite specialised and the learning curve may be too much for some. But I don't think you have to restrict yourself to projects you use a lot. </div><div><br /></div><div><b>Possible ways to get involved</b></div><div><br /></div><div>If you are good at graphics, why not offer to knock up some graphics for use on web-sites and in the application. [I am quite lucky that Daniel Palme added support for PartCover and OpenCover to his <a href="http://reportgenerator.codeplex.com/">Report Generator</a> tool and has done a much better job than I would ever do.]</div><div><br /></div><div>If you are good at installers, or even if you want to learn more about them, offer to manage them on behalf of the project.</div><div><br /></div><div>If there is a project you like, support it on forums like StackOverflow and help other users. 
</div><div><br /></div><div>Perhaps update the wikis and forums; sometimes the users know how a product works, or how it can best be used, better than the developers.</div><div><br /></div><div>If your company uses a lot of open source, why not buy some licenses for useful software tools and donate them; geeks love shiny new toys, and quite a few vendors such as <a href="http://www.ndepend.com/">NDepend</a> will donate licenses to open source projects.</div><div><br /></div><div>If you have an issue, try to help the developers as much as possible to resolve it by supplying as much information as you can, along with repeatable samples; remember the developers are international and doing this in their own time (and as you probably know, trying to repeat a scenario from scant information is very frustrating). Maintain contact whilst the issue is being resolved, and let them know when it is.</div><div><br /></div><div>Okay, that's me done on the subject for now; suggestions anyone?</div><div> </div><div><br /></div></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-5788222338675695087.post-81301982795947800132011-06-18T16:37:00.001+10:002011-06-19T11:38:31.327+10:00OpenCover First Beta ReleaseOkay, the first post on a blog I created many, many months ago and still had not got round to starting. Why the delay? Well, I've just been busy and not had a lot to say; actually, some would say I have too much to say, it's just not publishable.<div><br /></div><div>But now I am happy to announce that the first release of OpenCover is available on GitHub: <a href="https://github.com/sawilde/opencover/downloads">https://github.com/sawilde/opencover/downloads</a>.<div><br /></div><div>"So what?" I hear you say, "we have NCover, dotCover and PartCover (and probably many others with the word cover in the name); do we need another code coverage tool?" Well, I think the answer is "Yes!", but before I say why, a brief history.</div><div><br /></div><div>About a year ago I adopted PartCover when I found it lost, abandoned and only supporting .NET2; PartCover has a large user base, SharpDevelop and Gallio to name but two, and I felt it was a shame to just let it fall by the wayside. I had also done some work on CoverageEye (another open source tool that was originally hosted on GotDotNet and has since vanished) whilst working for a client in the UK, so I felt I had a fighting chance of doing the upgrade to .NET4; I don't know if my changes ever got uploaded to GotDotNet as I was not in charge of that.</div></div><div><br /></div><div>The adoption was far from easy for a number of reasons, one of which was that I was surprised just how little C++ I could actually remember, and it has changed a bit since I last used it in anger. Also, the lack of communication with the original developers meant that I was on my own in working out a) how it worked and b) just what the issues were (a lot of the reported issues had long since been abandoned by their reporters). </div><div><br /></div><div>At the beginning of the adoption I cloned the SourceForge repository to GitHub, git being the in-thing at the time, and after I was eventually granted access to SourceForge I attempted to maintain both repositories. 
Due to the lack of permissions on SourceForge, no matter how many times I asked, I eventually abandoned it and kept all development on GitHub; I also updated the SourceForge repository with a number of ReadMe posts pointing to GitHub.</div><div><br /></div><div>So upgrading PartCover progressed, and thankfully bloggers such as <a href="http://blogs.msdn.com/b/davbr/">David Broman</a> had already covered the subject of upgrading .NET2 profilers to .NET4 and the things to look out for. That, it would turn out, was the easy bit. </div><div><br /></div><div>PartCover had 3 main issues (other than the lack of .NET4 support):</div><div>1) Memory usage</div><div>2) 64 bit support</div><div>3) If the target crashed then you got no results.</div><div><br /></div><div>I'll tackle each of these in turn:</div><div>1) Memory - PartCover builds a model of each assembly/method/instrumented point in memory; though I managed to cut memory usage down by moving some of the data gathering to the profiler host, it wasn't enough - PartCover also added 10 IL instructions (23 bytes) for each sequence point identified, plus 4 bytes of allocated memory for the counter.</div><div><br /></div><div>2) 64 bit support - PartCover used a complex COM + Named Pipe RPC, which thankfully just worked, but I couldn't work out how to upgrade it to 64 bit (a few other helpers have offered and then gone incommunicado; I can only assume the pain was too much).</div><div><br /></div><div>3) Crashing == no results - this was due to the profiler being shut down unexpectedly and the runtime not calling the <a href="http://msdn.microsoft.com/en-us/library/ms230217.aspx">::Shutdown</a> method, and as such all that data never being streamed to the host process; thankfully people were quite happy to fix crashing code, so this was not a major issue but still an annoyance.</div><div><br /></div><div>All of this would take major rework of substantial portions of the code and the thought was unbearable. I took a few stabs at bits and pieces but got nowhere. </div><div><br /></div><div>Thankfully I had received some good advice, and though I tried to apply it to PartCover I realised the only way was to start again, taking what I had learned from the guys who wrote PartCover and some ideas I had come across from looking at other open source tools such as CoverageEye and Mono.Cecil.</div><div><br /></div><div><b>OpenCover was born.</b> </div><div><br /></div><div>This time I created a simple COM object supporting the profiler interfaces and made sure I could compile it in both 32 and 64 bit from day one. </div><div><br /></div><div>I then decided to make the profiler as simple as possible, so that it stays maintainable, and to move as much of the model handling as possible to the profiler host; thank heavens for <a href="https://github.com/jbevain/cecil">Mono.Cecil</a>. The only complex thing was deconstructing the IL and reassembling it after it had been instrumented. OpenCover only inserts 3 IL instructions (9/13 bytes depending on 32/64 bit) per instrumented point; these force a call into the profiler assembly itself and this C++ code then records the 'hit'. </div><div><br /></div><div>Finally I decided I had to get the data out of the profiler and into the host as soon as possible. I toyed with WCF and WWSAPI, but this also meant I had no XP support; at least I could test other ideas though. However, if my target/profiler crashed I would lose the last packet of data; not drastic but not ideal. Eventually I bit the bullet and switched to using shared memory.</div>
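<div><br /></div><div>For the curious, the host side of such a scheme can be sketched using the memory-mapped file support that arrived in .NET 4 (a rough sketch only; the object names and data layout here are made up and are not the actual OpenCover implementation):</div><pre class="brush:csharp;">using System.IO.MemoryMappedFiles;
using System.Threading;

class SharedMemoryHostSketch
{
    static void Main()
    {
        // The profiler (C++) would create the equivalent objects via
        // CreateFileMapping/CreateEvent; the names below are illustrative.
        using (var mmf = MemoryMappedFile.CreateOrOpen("Local\\Sketch_Results", 64 * 1024))
        using (var accessor = mmf.CreateViewAccessor())
        using (var blockReady = new EventWaitHandle(false, EventResetMode.AutoReset,
                                                    "Local\\Sketch_BlockReady"))
        {
            blockReady.WaitOne();                    // profiler has filled a block
            int count = accessor.ReadInt32(0);       // number of visit points in the block
            var visits = new int[count];
            accessor.ReadArray(4, visits, 0, count); // copy the data out of shared memory
        }
    }
}</pre>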
<div><br /></div><div>The switch to shared memory has brought a number of benefits, one of which is the ability to handle a number of processes, both 64 and 32 bit, under the same profiling session and to aggregate the results, as they all use the same shared memory. I have yet to work out how to set this up via configuration files, but anyone wishing to experiment can do so by modifying the call to ProfilerManager::RunProcess in the OpenCover.Host::Program.</div><div><br /></div><div>So this is where we are now: OpenCover has been released (beta, obviously) and, at the time of writing, some people have actually downloaded it. I am now braced for the issues to come flooding/trickling in.</div><div><br /></div><div>Feel free to download and comment, raise issues on GitHub, and get involved; Daniel Palme, he of <a href="http://reportgenerator.codeplex.com/">Report Generator</a> fame, is hopefully going to upgrade his tool to support OpenCover.</div><div><br /></div><div> </div><div><br /></div>Unknownnoreply@blogger.com9