MTM Azure Diagnostic Data Adapter: a nice ALM for the Cloud scenario…

Collect Azure events, performance data and trace logs during test execution and add them to the test result and bug reports. That way, team members can easily see what happened in the cloud (for any instance) during the test case, for faster and better bug solving.


Azure logs can get big and fuzzy. This, together with the data that instances capture locally (and that gets removed when instances are recycled), makes it hard to use these logs for testing scenarios.

The test execution date is often not the date the developer looks at the result. So when the developer wants to look at the Azure logs, events and/or performance counters, he or she has to dive into a mass of collected data, if that data still exists at all given recycled instances. It would be easier, faster and better if this data were available in the report.

Microsoft Test Manager Diagnostic Data Adapters

Within Microsoft Test Manager, you can configure at test plan level which data needs to be captured from the environment the test runs in. (Read: Setting Up Machines and Collecting Diagnostic Information Using Test Settings.)

It is also easy to create your own custom diagnostic data adapter for Microsoft Test Manager; see the post “Custom Diagnostic Data Adapter capture the Webcam”.
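A custom adapter boils down to a class that derives from the MTM `DataCollector` base class and hooks the test case events. Below is a minimal skeleton of how such an adapter could look; the class name and type URI are illustrative, and the actual Azure query logic (not shown) would live in the `TestCaseEnd` handler:

```csharp
using System.Xml;
using Microsoft.VisualStudio.TestTools.Execution;

// Illustrative type URI and names; the real adapter sources aren't published yet.
[DataCollectorTypeUri("datacollector://Example/AzureDiagnostics/1.0")]
[DataCollectorFriendlyName("Azure Diagnostics")]
public class AzureDiagnosticsCollector : DataCollector
{
    private DataCollectionEvents events;
    private DataCollectionSink sink;

    public override void Initialize(
        XmlElement configurationElement,
        DataCollectionEvents events,
        DataCollectionSink dataSink,
        DataCollectionLogger logger,
        DataCollectionEnvironmentContext environmentContext)
    {
        this.events = events;
        this.sink = dataSink;
        this.events.TestCaseStart += OnTestCaseStart;
        this.events.TestCaseEnd += OnTestCaseEnd;
    }

    private void OnTestCaseStart(object sender, TestCaseStartEventArgs e)
    {
        // Remember the start time so the WAD tables can be filtered later.
    }

    private void OnTestCaseEnd(object sender, TestCaseEndEventArgs e)
    {
        // Query the WAD tables, write the rows to a file and attach it:
        // sink.SendFileAsync(e.Context, logFilePath, true);
    }
}
```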


Microsoft.WindowsAzure.Diagnostics

Azure has a very easy to use namespace for monitoring Azure applications: Microsoft.WindowsAzure.Diagnostics. This namespace provides capabilities to capture application crash dumps, performance counters, Windows event logs, Windows Azure (trace) logs and file-based data buffers (the last one has some problems after the 1.3 SDK). Initializing and configuring these events, performance data and trace logs for your Azure application is easy. (See: Initializing and Configuring Diagnostic Data Sources.)

When you configure an Azure application to use this namespace, capturing diagnostic data is easy:

// Get the default configuration for the role instance.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

//Windows Event Logs
config.WindowsEventLog.DataSources.Add("System!*");
config.WindowsEventLog.DataSources.Add("Application!*");
config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
config.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Warning;

//Azure Trace Logs
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;

//Crash Dumps
CrashDumps.EnableCollection(true);

//IIS Logs
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

//Diagnostic logs
config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

// Start the diagnostic monitor with this configuration
// (the connection string name differs per SDK version).
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

This results in several WAD tables in Azure Table storage.


With the corresponding storage account and deployment ID, this information can be queried from a client tool.

var results = from g in wadDiagnosticInfrastructureLogsTableContext.WADDiagnosticInfrastructureLogsTable
              where StartTime < g.Timestamp && g.Timestamp < EndTime
              select g;

Writing this result to a file and adding it to the test case does the trick.
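That last step could be sketched like this. The entity properties (`Level`, `Message`) and the `dataSink`/`testCaseEndEventArgs` variables are illustrative; in the adapter they come from the `Initialize` call and the `TestCaseEnd` event:

```csharp
// Sketch, assuming the query result from the WAD table context above.
var lines = results.Select(r =>
    string.Format("{0}\t{1}\t{2}", r.Timestamp, r.Level, r.Message));

// Write the captured rows to a temporary file...
var logFile = Path.Combine(Path.GetTempPath(), "WADDiagnosticInfrastructureLogs.log");
File.WriteAllLines(logFile, lines.ToArray());

// ...and attach it to the test result from the adapter's TestCaseEnd handler.
dataSink.SendFileAsync(testCaseEndEventArgs.Context, logFile, true);
```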

Configuration

Microsoft.WindowsAzure.Diagnostics also has the capability of “Modifying the Diagnostic Monitor Configuration Remotely”. This gives the Microsoft Test Manager Azure Diagnostic Data Adapter the capability to set which information to capture and add to the test result, separate from what is configured in the Azure application.
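A sketch of such a remote change, using the SDK's deployment-level diagnostic manager; the role name and connection string are illustrative, and the storage account and deployment ID would come from the adapter's configuration:

```csharp
// Remote reconfiguration sketch (Azure SDK 1.x diagnostics API).
var storageAccount = CloudStorageAccount.Parse(connectionString);
var deploymentManager = new DeploymentDiagnosticManager(storageAccount, deploymentId);

foreach (var instanceManager in
         deploymentManager.GetRoleInstanceDiagnosticManagersForRole("WebRole1"))
{
    var config = instanceManager.GetCurrentConfiguration();

    // Capture Information-level trace logs for the duration of the test run.
    config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
    config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

    instanceManager.SetCurrentConfiguration(config);
}
```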

For example, the log level…


This is what it looks like…

Windows Azure Diagnostics MTM Adapter

First a run that captures only Warning-level trace logs, then a run that also captures Information-level trace logs from the sample Azure application.

Some thoughts

This solution isn’t rock solid; actually, it is really conceptual.
There are two challenges. First, the configuration is published to the Azure application, so the application owns the configuration; when somebody else changes it, you don’t know what you are capturing. The second is the ‘wait’ time: it takes Azure some time to push the logs to table storage. It isn’t real time, so when capturing the data it can happen that not all rows/events for your run have been collected yet.

Will publish the sources later on, too many hard-coded values in there at this moment. Smile
