Thursday, April 3, 2014

Customising New Relic installation during Azure deployments

For about a year we've been running New Relic to monitor our WebRoles on the Azure platform. Installation has been quite simple, following the instructions initially found on the New Relic site and now available via NuGet; however, two things about this process have been irking me.

First, I wanted to be able to distinguish the CI and Production deployments in the New Relic portal by giving them different names, but the name as it appears in the New Relic portal is controlled through a setting in the web.config and cannot be controlled through the Azure portal.

Second, I wanted to be able to control which licence key we used for the CI (free licence, limited functionality) and Production (expensive licence, full functionality) deployments; however, the key is embedded in newrelic.cmd and applied when the New Relic agent is installed, so it is not easy to change during or after deployment.

The initial solution to both these problems involved producing two packages: one for the CI environment(s) and one for the Production environment. Instead of the normal Debug and Release build outputs, a third target, Production, was used, and the web.config was modified during the build using a transform that changed the name to the one wanted. The licence key issue was resolved by having two newrelic.cmd items in the project and packaging the appropriate one with each build. This worked, after a fashion, but it was not ideal, and the ProdOps guys were keen to have control over the name and licence key used in production.

Changing the Application name

New Relic gets the application name from a setting in the web.config, so what is needed is to read a setting from the Azure configuration and update the web.config at runtime. There are many ways to solve this, but the approach we took was based on the solution to an identical issue raised on GitHub.

For completeness, however, I will reiterate the steps below:

  1. In the ServiceDefinition.csdef file add a setting to the <ConfigurationSettings/> section:

     <ConfigurationSettings>
       <Setting name="NewRelicApplicationName" />
     </ConfigurationSettings>

  2. In the ServiceConfiguration file for your environment add a setting that will be used to set the Application name in New Relic:

     <ConfigurationSettings>
       <Setting name="NewRelicApplicationName" value="MyApplication" />
     </ConfigurationSettings>

  3. In the WebRole.cs file for your application amend your code as follows:

     using System.Linq;
     using Microsoft.Web.Administration;           // ServerManager - reference Microsoft.Web.Administration.dll
     using Microsoft.WindowsAzure.ServiceRuntime;  // RoleEntryPoint, RoleEnvironment

     public class WebRole : RoleEntryPoint
     {
         public override bool OnStart()
         {
             ConfigureNewRelic();

             return base.OnStart();
         }

         private static void ConfigureNewRelic()
         {
             if (RoleEnvironment.IsAvailable && !RoleEnvironment.IsEmulated)
             {
                 string appName;
                 try
                 {
                     appName = RoleEnvironment.GetConfigurationSettingValue("NewRelicApplicationName");
                 }
                 catch (RoleEnvironmentException)
                 {
                     // the setting is not defined; nothing we can do so just return
                     return;
                 }

                 if (string.IsNullOrWhiteSpace(appName))
                     return;

                 using (var server = new ServerManager())
                 {
                     // get the site's web configuration; full IIS sites are named <instance-id>_<site-name>
                     const string siteNameFromServiceModel = "Web";
                     var siteName = string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);
                     var siteConfig = server.Sites[siteName].GetWebConfiguration();

                     // get the appSettings section and overwrite the name New Relic will report under
                     var appSettings = siteConfig.GetSection("appSettings").GetCollection();
                     AddConfigElement(appSettings, "NewRelic.AppName", appName);
                     server.CommitChanges();
                 }
             }
         }

         private static void AddConfigElement(ConfigurationElementCollection appSettings, string key, string value)
         {
             // remove any existing entry for this key so we don't create a duplicate
             if (appSettings.Any(t => t.GetAttributeValue("key").ToString() == key))
             {
                 appSettings.Remove(appSettings.First(t => t.GetAttributeValue("key").ToString() == key));
             }

             ConfigurationElement addElement = appSettings.CreateElement("add");
             addElement["key"] = key;
             addElement["value"] = value;
             appSettings.Add(addElement);
         }
     }
And that should be it.

Changing the New Relic licence key

The New Relic licence key is applied when the New Relic agent is installed on the host, so what is needed is to read the Azure configuration when newrelic.cmd is executed as part of the startup tasks (defined in the ServiceDefinition.csdef) and apply the key as the agent is installed. There does not appear to be a way of changing the licence key once the agents have been installed, other than reducing the number of instances to 0 and then scaling back up (I suggest you use the staging slot for this).

  1. In the ServiceDefinition.csdef file add a setting to the <ConfigurationSettings/> section:

     <ConfigurationSettings>
       <Setting name="NewRelicLicenceKey" />
     </ConfigurationSettings>

     and add a new environment variable to the newrelic.cmd startup task that will be populated from the new configuration setting (note the variable name must match the one used in newrelic.cmd below):

     <Task commandLine="newrelic.cmd" executionContext="elevated" taskType="simple">
       <Environment>
         <Variable name="EMULATED">
           <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
         </Variable>
         <Variable name="NewRelicLicenceKey">
           <!-- http://msdn.microsoft.com/en-us/library/windowsazure/hh404006.aspx -->
           <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='NewRelicLicenceKey']/@value" />
         </Variable>
         <Variable name="IsWorkerRole" value="false" />
       </Environment>
     </Task>

  2. In the ServiceConfiguration file for your environment add a setting that will hold the licence key to apply:

     <ConfigurationSettings>
       <Setting name="NewRelicLicenceKey" value="<ADD YOUR KEY HERE>" />
     </ConfigurationSettings>

  3. Edit your newrelic.cmd to read the licence key from the environment variable:

     :: the licence key now comes from the service configuration via the startup task
     SET LICENSE_KEY=%NewRelicLicenceKey%

Now you should be able to control the New Relic licence key during your deployment.

Saturday, December 14, 2013

Book Review - Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API

I've written a book review of 'Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API' and posted it on CodeProject. In summary: I liked this book, and I took a lot from it that I am now using to build a sample application using Kendo UI. If you want to learn about ASP.NET Web API, though, this book isn't for you; you'll learn a lot more from the ASP.NET Web API site.

Sunday, September 15, 2013

Application Tracing

So OpenCover is as feature complete as I care to take it at the moment (I may add the one outstanding feature involving Windows Store applications should I have a need for it), and I have decided not to continue with OpenMutate, as I can't really find a need for it other than as an exploratory investigation into re-JIT.

I do have one more itch to scratch when it comes to profilers, and that is application tracing; it may also let me play with some other technologies, which I'll list later. This itch started a few months back, perhaps 6+ months ago, when I was trying to integrate two commercial tracing products into an application I was working on and they both died horribly. I looked for alternatives and found nothing available. Now, I could have started the project then, but instead I pestered both vendors until they fixed the problem, which they eventually did (within a week or two of each other, or so it seemed to me); I integrated one of the solutions and moved on... but the itch never went away.

So what am I thinking? Well, a profiler (obviously) with 32/64-bit support (again, a given), making obscene use of the COR_PRF_MONITOR_ENTERLEAVE functionality. The problem here is that I don't really know what people will want to trace (hey, that is why there are companies that do this sort of thing, with BAs and suchlike to decide), so in the first instance I'll trace everything (which will probably be very slow) and go from there.

This leads to the next problem: data, lots of it, lots and lots of it, and that data is going to need a home, one I can later use to create reports. For this I am thinking asynchronous delivery: initially a queue, and potentially an event-sourced data store such as EventStore or NEventStore. An event-sourced store would allow the views to be regenerated once we know what they should be; perhaps something along the lines of Splunk or Splunk Storm would also come into play.
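Whatever store wins, the hand-off needs to keep the capture path cheap: synchronous appends to an in-memory buffer, with batches flushed asynchronously to the store. A toy sketch of that shape (in JavaScript for brevity; record, appendBatch and the flush interval are all made-up names, not a real design):

// toy sketch: cheap synchronous capture, asynchronous batched persistence
var buffer = [];

function record(traceEvent) {
  // called for every enter/leave; must do as little work as possible
  buffer.push(traceEvent);
}

// flush off the hot path once a second
setInterval(function () {
  if (buffer.length === 0) return;
  var batch = buffer.splice(0, buffer.length);
  appendBatch(batch, function (err) {
    if (err) console.error('failed to persist batch of ' + batch.length, err);
  });
}, 1000);

function appendBatch(batch, done) {
  // placeholder: append the batch to a queue or event store
  // (e.g. EventStore / NEventStore) so views can be rebuilt later
  process.nextTick(done);
}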

So, a name... always the hardest part, but thankfully we have the internet and online dictionaries, so I've gone with OpenDiscover.

Saturday, March 9, 2013

Creating a simple NodeJS app with Mongo

Okay, I woke up this morning (6am) with a need to create a simple reporting dashboard to display the coverage results from OpenCover when it dog-foods its own tests. Now that OpenCover has no reported bugs, I decided to use my spare time to investigate other technologies for a while.

What I needed was simple 'online' storage to capture results from the build system and the ability to extract that data into charts. Normally I'd probably knock up a simple rails app because that is easy to do; however I decided, probably due to the heat, to use the following:

  • node.js - a technology I haven't used but have been meaning to for a while; I also like the JavaScript syntax better than ruby (it's a personal thing)
  • mongodb - a database I am familiar with
  • google charts - free; as in beer.
  • heroku - free; well, my intended usage will be.
A quick time-boxed search of the web for how to use node with mongodb and create a RESTful API, and I settled on the following packages:
  • mongoose - for interacting with the mongo database
  • restify - for creating a simple REST server
  • nodemon - monitors changes in your app and restarts it; sounds useful
I'll assume other packages will be added to the mix as challenges present themselves. It's now 7am: time for breakfast, and then the fun starts...
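To give an idea of how those packages were expected to fit together, here is a minimal sketch (not the actual code in the repository; the Result model and the routes are made up):

var restify = require('restify');
var mongoose = require('mongoose');

// heroku supplies the mongo connection string via an environment variable
mongoose.connect(process.env.MONGO_URL || 'mongodb://localhost/coverage');

// a made-up model for a coverage result
var Result = mongoose.model('Result', new mongoose.Schema({
  project:  String,
  coverage: Number,
  date:     { type: Date, default: Date.now }
}));

var server = restify.createServer({ name: 'coverage-api' });
server.use(restify.bodyParser());

// the build system POSTs a result after each run
server.post('/results', function (req, res, next) {
  Result.create(req.body, function (err, result) {
    if (err) return next(err);
    res.send(201, result);
    return next();
  });
});

// the dashboard GETs the results back for charting
server.get('/results', function (req, res, next) {
  Result.find().sort('date').exec(function (err, results) {
    if (err) return next(err);
    res.send(results);
    return next();
  });
});

server.listen(process.env.PORT || 8080);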

And a few hours later we have a simple storage system hosted on heroku; all we need now is the charts.
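The charting side should then be little more than feeding the stored results into a LineChart. A sketch of the plan with the google charts visualization API (it assumes the jsapi loader script and a <div id="chart"> are on the page, and the data rows are illustrative):

// assumes https://www.google.com/jsapi is loaded and the page has <div id="chart">
google.load('visualization', '1.0', { packages: ['corechart'] });
google.setOnLoadCallback(function () {
  // in reality the rows would come from GET /results
  var data = google.visualization.arrayToDataTable([
    ['Build', 'Coverage %'],
    ['#101',  88.1],
    ['#102',  89.4]
  ]);
  var chart = new google.visualization.LineChart(document.getElementById('chart'));
  chart.draw(data, { title: 'OpenCover dog-food coverage' });
});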

The repository can be found on github

I am sure it will evolve over time but it was very simple to get to this stage by leveraging the work of all those who have gone before.

Saturday, August 25, 2012

MongoDB, Mongoid, MapReduce and Embedded Documents.

I am using Mongoid to store some data as documents in a MongoDB database and then run some MapReduce queries against the data. I have no trouble mapping data from normal documents or from a single embedded document, but I could not extract data from an embedded collection of documents, i.e.

class Foo
  include Mongoid::Document

  #fields
  field :custom_id, :type => String

  #relations
  embeds_many :bars
end

class Bar
  include Mongoid::Document

  #fields
  field :custom_field, :type => String

  #relations
  embedded_in :foo
end

First, it appears we need to run the map part of the MapReduce against the parent document and not the child, i.e. Foo.map_reduce(...) will find documents but Bar.map_reduce(...) will not. That is not surprising, however, as it is also not possible to count all Bar documents by running Bar.all.count in the rails console.

Now, a MapReduce query in MongoDB is written as a pair of JavaScript functions: the first does the map by emitting a mini-document of data, and the second reduces, aggregating the emitted data in some manner. Thinking I had a collection (array), my first attempt to map data from the embedded documents was this:

MAP:
function() {
  if (this.bars == null) return;
  for (var bar in this.bars){
    emit(bar.custom_field, { count: 1 });
  }
}

REDUCE:
function(key, values) {
  var total = 0;
  for ( var i=0; i< values.length; i++ ) {
    total += values[i].count;
  }
  return { count: total };
}

This produced an unusual result: a single aggregated document with a null key, whose count was the total number of child documents (summed across all the parents).

Now, I could have just broken the child documents out and not embedded them, but I didn't want to break the model over something so trivial that must, in my eyes, be possible.

After much googling and reading of forum posts I couldn't find any samples, but I eventually noticed some 'unusual' syntax on an unrelated topic, which led me to rewrite the map function as this:

function() {
  if (this.bars == null) return;
  for (var bar in this.bars) {
    emit(this.bars[bar].custom_field, { count: 1 });
  }
}

Which produced the expected results. This is probably obvious to anyone who knows MongoDB and MapReduce well, but it took me a while to find, and it still isn't that intuitive. I think I now know why it is this way: JavaScript's for..in loop iterates over the keys of an object, and for an array those keys are the indices "0", "1", ..., not the elements. In the first attempt, bar was therefore just an index string, bar.custom_field was always undefined, and everything was emitted under a single null key; the element itself has to be looked up with this.bars[bar]. I thought I'd write it up as a bit of a reference.
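The gotcha is easy to demonstrate outside MongoDB with a few lines of plain JavaScript:

// for..in iterates over an array's keys (index strings), not its values
var bars = [{ custom_field: 'a' }, { custom_field: 'b' }];

for (var bar in bars) {
  console.log(bar);                     // "0" then "1" - the indices
  console.log(bar.custom_field);        // undefined - bar is just a string
  console.log(bars[bar].custom_field);  // "a" then "b" - the actual elements
}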

Friday, June 8, 2012

The "Pigs" and "Chickens" fable

I think anyone who has heard of Agile and Scrum has heard of the Pigs and Chickens story, which describes those who are committed to the delivery of the project, the "Pigs", and those who are merely involved, the "Chickens." If not, click on the image and learn more about it.

Implementing Scrum - Pigs and Chickens

However, I was recently re-reading "Death March" by Edward Yourdon (1st edition) and came across this response to the parable, in the context of commitment whilst on a death march:
“I’m not sure you will find any old pigs in development perhaps more chickens. I think that kind of commitment continues until (inevitably?) you get into the first death march project – then there is a rude awakening. Either the pig realises what’s happening, this is the slaughterhouse! RUN!! Or the pig is making bacon…” - Paul Mason (Death March).
I just found it quite amusing and thought I should share...




Friday, February 3, 2012

Mutation Testing; a use for re-JIT?

Where to start...
Mutation testing means modifying a program in small ways and then executing the original 'passing' tests that exercise that code, expecting to watch them fail. It is a way of making sure your tests are actually testing what you believe they are testing.
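A trivial illustration (in JavaScript for brevity, though the rest of this post is about .NET): a mutant that survives the test suite reveals a weak test.

// code under test
function isAdult(age) { return age >= 18; }

// an existing 'passing' test that only probes one side of the boundary
console.assert(isAdult(21) === true);

// mutant: '>=' changed to '>'
function isAdultMutant(age) { return age > 18; }

// the same test still passes against the mutant, so the mutant survives;
// a boundary test such as isAdult(18) === true would have killed it
console.assert(isAdultMutant(21) === true);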

Setting the stage...
So how can we do this with .NET? Well, first we need to know which tests execute which code, and we can use OpenCover for that with its tracking-by-test feature. With that feature we can see which tests execute which sequence points and also which branches were exercised; it is this latter information we can take advantage of when creating a mutation testing utility.

New toys to play with...
Now, this mutation tester is going to work at the IL level, and as such we could use the JIT (just-in-time) compilation instrumentation approach used by OpenCover (and PartCover). However, that would mean either complicated instrumentation in which we control which mutated path is exercised, or simpler instrumentation that requires the process under test (e.g. nunit, mstest, ...) to be stopped and restarted each time new code is to be exercised. With .NET 4.5 (in preview at the time of writing) there is a re-JIT compilation feature we could use instead, which would allow simple instrumentation without needing to stop and restart the process under test. There are a number of limitations to re-JIT, but after reviewing them (several times) I don't think any are actual show stoppers.

However, to make re-JIT useful we need a way of executing a test or tests repeatedly without restarting the application under test, and this isn't possible with nunit or mstest. It should, however, be possible to use the test runners from AutoTest.Net if we host them directly, or in a separate process that can be communicated with.

A plan...
So the flow will be something like this (I wonder how well this will stand up to the test of time). I haven't looked at the latest profiler API in depth, but the documentation on MSDN and David Broman's blog seem to indicate this should be possible.

  • Run OpenCover to produce an XML file listing which tests exercised which branches
  • For each branch point =>
    • if it is the first branch of the method, store the original IL (as we will need repeated access to it)
    • (re)instrument the method containing the branch point, using the original IL of that method but with the logic of only that point inverted
    • execute each test that exercises that branch point => record pass/fail
    • if it is the last branch of the method, revert the method to the original IL
All it needs is a name...
All of this will be hosted on GitHub under OpenMutate. Let the games begin....