https://salvoz.com/Salvo(z)20202020-05-31T21:40:16ZWelcome to Salvo(z)https://salvoz.com/posts/2018-01-01-static-content.htmlConverting from WordPress to Static Content Siteasalvo2018-01-01T07:00:00Z<h3 id="background">Background</h3>
<p>My salvoz.com blog site went through numerous iterations over the years, both in hosting and blogging platform. It initially was self-hosted on DasBlog, then moved to GoDaddy, and finally to WordPress and MediaTemple for hosting.</p>
<p>WordPress and MediaTemple were okay, but I wasn't using, nor did I need the full capabilities of WordPress. In fact, most of the plug-ins I used were for security and performance improvements. I also wanted to redirect the MediaTemple hosting fees towards another project.</p>
<p>Therefore, I decided to jump on the static content generator bandwagon. The site consists of a collection of files (markdown, HTML templates, meta-data), is built into static HTML files whenever a change is made, and is then deployed. The advantages are raw performance and little concern for site security.</p>
<br>
<h3 id="the-plan">The Plan</h3>
<ul>
<li>Static Content Generator: <a href="https://wyam.io">Wyam</a></li>
<li>Comments: <a href="https://disqus.com/home/forum/salvoz/">Disqus</a></li>
<li>Host: <a href="https://netlify.com">Netlify</a></li>
<li>Build: <a href="http://cakebuild.net">Cake</a> and <a href="https://www.jetbrains.com/teamcity">TeamCity</a></li>
<li>WordPress Export: <a href="https://wordpress.org/plugins/jekyll-exporter/">Jekyll Exporter</a></li>
<li>Content Editor: VS Code</li>
</ul>
<br>
<h3 id="exporting-the-site">Exporting the Site</h3>
<p>I started out by using WinSCP to download my entire site from MediaTemple as a backup, and then converted my site to use Disqus for comments. This was straightforward: I just needed to install the Disqus WordPress plugin and follow the migration steps. I then installed Jekyll Exporter to export the content of my site to Jekyll files.</p>
<p>I ran into a few problems around WordPress and PHP with Jekyll Exporter, mainly the error "cannot use output buffering in output buffering display handlers". I tried the following to resolve this issue:</p>
<ul>
<li>Upgraded all plug-ins</li>
<li>Upgraded WordPress</li>
<li>Confirmed I was running a recent version of PHP</li>
<li>Set debug mode in wp-config.php
<ul>
<li>define('WP_DEBUG', true);</li>
</ul>
</li>
<li>Increased the memory limit, also in wp-config.php (see the combined snippet after this list)
<ul>
<li>define('WP_MEMORY_LIMIT', '512M'); //What finally fixed it.</li>
</ul>
</li>
</ul>
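<p>For reference, the two wp-config.php settings from the list above look like this when applied together:</p>
<div class="code-block">
<pre><code class="language-php">// In wp-config.php, above the "/* That's all, stop editing! */" line
define('WP_DEBUG', true);           // surface errors during the export
define('WP_MEMORY_LIMIT', '512M');  // what finally fixed it
</code></pre>
</div>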
<p>With my images downloaded via WinSCP and a collection of Jekyll markdown files, I was ready to start the conversion to Wyam.</p>
<br>
<h3 id="wyam-conversion">Wyam Conversion</h3>
<p>Following the <a href="https://wyam.io/docs/usage/obtaining#chocolatey">instructions</a> I installed Wyam using Chocolatey.</p>
<div class="code-block">
<pre><code class="language-shell">choco install Wyam -s https://www.nuget.org/api/v2/
</code></pre>
</div>
<p>Next I created a new site using the blog recipe.</p>
<div class="code-block">
<pre><code class="language-shell">wyam new -r blog
</code></pre>
</div>
<p>And to run the site in preview mode with live re-load:</p>
<div class="code-block">
<pre><code class="language-shell">wyam -r blog -p -w
</code></pre>
</div>
<p>After I had the basic folder structure set up by Wyam, I went about importing my content. First, I made sure that I had Git LFS set up for my repository so images and binary files were handled more efficiently in Git. Ultimately, I decided to upload all of my old images to Azure Blob Storage instead of redeploying them with the site every time.</p>
<p>I wrote code to convert the Jekyll meta-data in the markdown posts to Wyam’s format and update image links. I defined a standard for code snippets in posts: I use the markdown pre/code identifier ( ``` ) and <a href="https://highlightjs.org/">highlight.js</a>, plus a div tag with a little extra formatting for margins and padding.</p>
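<p>The conversion code itself was nothing fancy. A rough sketch of the idea (not the exact code I used; it assumes simple "key: value" front matter between "---" delimiters, and the storage account name is a placeholder):</p>
<div class="code-block">
<pre><code class="language-cs">using System;
using System.IO;
using System.Linq;

static class JekyllToWyam
{
    public static void ConvertPost(string path)
    {
        var lines = File.ReadAllLines(path).ToList();

        // Jekyll front matter sits between the first two "---" lines.
        int end = lines.IndexOf("---", 1);
        for (int i = 1; i < end; i++)
        {
            lines[i] = lines[i]
                .Replace("title:", "Title:")        // keys the Wyam blog recipe expects
                .Replace("date:", "Published:")
                .Replace("categories:", "Tags:");
        }
        lines.RemoveAll(l => l.StartsWith("layout:")); // Jekyll-only key

        // Point old WordPress image paths at their new blob storage home
        // (placeholder account name).
        var text = string.Join(Environment.NewLine, lines)
            .Replace("/wp-content/uploads/",
                     "https://<storage-account>.blob.core.windows.net/public/");

        File.WriteAllText(path, text);
    }
}
</code></pre>
</div>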
<p>The most time-consuming part of the migration was reviewing all the old posts and cleaning up the code snippets and lists (the Jekyll exporter seemed to have a problem exporting these).</p>
<br>
<h3 id="hosting">Hosting</h3>
<p>Originally I planned to host the site on Azure Function proxies ("free consumption based plan") in front of Azure Blob storage. Azure Function proxies handle the following:</p>
<ul>
<li>Lack of default document support in Blob Storage</li>
<li>Custom domain with SSL support</li>
<li>Redirects from my old blog URL format to the new</li>
</ul>
<p><a href="https://markheath.net/post/static-websites-azure-blob-storage-functions-proxies">Mark Heath</a> has a write-up on how to use Azure Functions to accomplish the above tasks, and was my initial inspiration for using Azure. I deployed my entire site to blob storage and DNS cut-over before I decided to switch to Netlify.</p>
<p>Netlify is where the Wyam project’s own site is hosted, as well as the lead maintainer’s blog. It is a hosting provider focused on static sites, and it offers a generous free hosting plan, even for commercial endeavors. The Netlify <a href="https://www.netlify.com/docs/">documentation</a> is a great place to see all of their features, but the ones I was most interested in are:</p>
<ul>
<li>Robust Redirect and Rewrite Rules</li>
<li>Custom 404 page support</li>
<li>Free SSL on Custom Domains (using Let's Encrypt)</li>
<li>Hands-off deployment to their CDN for an extra performance boost</li>
<li>Support for command line deployments</li>
</ul>
<p>The redirect and rewrite rules are pretty important for getting your root domain to redirect to www, or vice-versa, depending on what you want your primary URL to be. For this site, I’ve been using the root of the domain, salvoz.com, for several years. I have another site that I converted to Wyam and hosted on Netlify, where I decided to go with www. I discovered that having a DNS CNAME record for your root domain is technically not allowed per the DNS spec; however, some name server providers, such as ZoneEdit and Netlify, allow it. Azure DNS prevents it, and while NameCheap allows you to do this, if you have an MX record pointing at your root domain, it will not resolve with a CNAME also pointing at the root.</p>
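<p>For illustration, the domain-level rule for this site’s Netlify _redirects file would look something like the following; the second rule shows the shape of an old-to-new post redirect, with a made-up old path:</p>
<div class="code-block">
<pre><code># _redirects: force www onto the bare domain
https://www.salvoz.com/* https://salvoz.com/:splat 301!
# old WordPress-style URL to new post URL (illustrative path)
/2018/01/01/static-content/ /posts/2018-01-01-static-content.html 301
</code></pre>
</div>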
<br>
<h3 id="build-and-deployment">Build and Deployment</h3>
<p>I use Cake to write my build scripts, and TeamCity to run the Cake script whenever the master branch of my repository is updated. As I mentioned earlier, I had my site deploying to Azure before Netlify, and I still have my build script and TeamCity setup to support this. In fact, I still deploy to both locations just to keep track of the progress of Azure Blobs as a static content site.</p>
<p>There is a Cake add-in for Wyam, which is well-documented, and the Wyam GitHub repository has an example of building and deploying to Netlify, which I followed for the most part. I ended up with a PowerShell script, invoked from Cake, to upload to Netlify. Using curl, as the Wyam site does, turned into a bit of a hassle on my Windows-based build agents: once I installed curl using Chocolatey, I got certificate validation errors, so I decided to go with something more native to Windows.</p>
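<p>The upload script essentially boils down to one Invoke-RestMethod call against Netlify’s zip deploy endpoint. A trimmed-down sketch (the parameter names are illustrative, not the real script’s):</p>
<div class="code-block">
<pre><code class="language-powershell">param($SiteId, $Token, $ZipPath)

# POST the zipped output folder to Netlify's deploy API
Invoke-RestMethod -Method Post `
    -Uri "https://api.netlify.com/api/v1/sites/$SiteId/deploys" `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType "application/zip" `
    -InFile $ZipPath
</code></pre>
</div>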
<p>Also as part of my Cake script, I set keys for AppInsights and Google Analytics. You can see in my script that I pass the Azure Storage account key and Netlify API token in as parameters; these are set as password fields in TeamCity to afford some level of protection. The keys for AppInsights and Google Analytics I leave in the build script, since they are visible to anyone who looks at the source of the published web pages.</p>
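<p>The overall shape of the script, with the secrets flowing in as arguments, is roughly this (task, argument and file names are illustrative; the linked copy below has the real thing):</p>
<div class="code-block">
<pre><code class="language-cs">#addin nuget:?package=Cake.Wyam

var target       = Argument("target", "Default");
var netlifyToken = Argument<string>("netlifyToken");   // set as a password field in TeamCity

Task("Build")
    .Does(() => Wyam(new WyamSettings { Recipe = "Blog" }));

Task("Deploy-Netlify")
    .IsDependentOn("Build")
    .Does(() => StartProcess("powershell",
        new ProcessSettings { Arguments = $"-File upload-netlify.ps1 -Token {netlifyToken}" }));

RunTarget(target);
</code></pre>
</div>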
<p>Rather than display my full Cake script here, I will simply <a href="/assets/post-content/2018/2018-01-01-build.txt">link to a copy</a>.</p>
<br>
<h3 id="disqus-update">Disqus Update</h3>
<p>Once the site was up and running with the new URLs for the posts, I updated my Disqus comments accordingly. Disqus has three ways you can update the comment-to-post association: manual, a CSV file, and an automated crawl. Manual and CSV should be self-explanatory. The automated crawl, which is the method I used, requires that you have 301 redirects set up from your posts’ old URLs to their new URLs, which I did. So after a couple of hours, I had all my comments updated and showing up on my posts once again.</p>
<br>
<h3 id="closing-thoughts">Closing Thoughts</h3>
<p>While it ended up being a little more work than I thought, I am pleased with how everything turned out. I had only used VS Code a bit for editing my Cake scripts, but during this migration project I’ve become quite happy working inside it. A big thanks goes out to my wife, who spent considerable time going through the old posts.</p>
<p>My salvoz.com blog site went through numerous iterations over the years, both in hosting and blogging platform. It initially was self-hosted on DasBlog, then moved to GoDaddy, and finally to WordPress and MediaTemple for hosting.</p>https://salvoz.com/posts/2016-08-23-detecting-device-idiom-on-xamarin-uitest.htmlDetecting Device Idiom on Xamarin UITestasalvo2016-08-23T08:01:37Z<p>When writing UI Tests with Xamarin UITest, you may want to take a different action when testing on a tablet form factor (idiom) or phone. There is some built in support in Xamarin UITest for detecting this on the iOS side, but nothing on the Android side. The solution I came up with was to use a <a href="https://developer.xamarin.com/guides/testcloud/uitest/working-with/backdoors/" target="_blank">backdoor</a> method to give me the information I needed.</p>
<p>First, let’s take a look at how we create an instance of IApp, as we will need this for both iOS and Android. I am assuming the user has a basic understanding of how Xamarin UITest works.</p>
<p>When setting up your test, you will use the fluent API built off of ConfigureApp to create an App for either Android or iOS. This creates a platform-specific instance that implements IApp.</p>
<div class="code-block">
<pre><code class="language-c#">public static IApp StartApp(Platform platform)
{
if (platform == Platform.Android)
{
return ConfigureApp.Android.DoSomeCustomAndroidConfig.StartApp();
}
return ConfigureApp.iOS..DoSomeCustomiOSConfig().StartApp()
}
</code></pre>
</div>
<p>In my base test class, I call StartApp and then check to see if I’m running on a phone or tablet.</p>
<div class="code-block">
<pre><code class="language-c#">[SetUp]
public void SetUp()
{
App = StartApp(Platform);
//Figure out what kind of Idiom we have.
//Use the built in support for iOS and a backdoor method for Android
var iOSApp = App as iOSApp;
if (iOSApp != null)
{
IsPhone = iOSApp.Device.IsPhone;
}
else
{
var idiom = App.Invoke("BackdoorGetIdiom");
IsPhone = idiom.Equals("Phone");
}
}
</code></pre>
</div>
<p>We first check to see if our App is of type iOSApp, and if it is, we’ll use the built-in support in Xamarin UITest. If we are running on Android, then we’ll use the <a href="https://developer.xamarin.com/guides/testcloud/uitest/working-with/backdoors/" target="_blank">backdoor</a> method approach, which I’ll outline next.</p>
<p>All I had to do was create my backdoor method and expose it from my MainActivity which is in my Xamarin Forms Android project.</p>
<div class="code-block">
<pre><code class="language-c#">[Export("BackdoorGetIdiom")]
public string BackdoorGetDeviceIdiom()
{
return Device.Idiom.ToString();
}
</code></pre>
</div>
<p>You may need to add a reference to the Mono.Android.Export.dll. I found this in “C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\MonoAndroid\v6.0\Mono.Android.Export.dll”. Now, all you need to do is invoke the method as shown in my [SetUp] method.</p>
<p>There are various rules that you must follow when writing your backdoor methods which Xamarin outlines on their <a href="https://developer.xamarin.com/guides/testcloud/uitest/working-with/backdoors/" target="_blank">backdoor documentation page</a>. The rules differ between iOS and Android so be sure to read them carefully.</p>
<p>When writing UI Tests with Xamarin UITest, you may want to take a different action when testing on a tablet form factor (idiom) or phone. There is some built in support in Xamarin UITest for detecting this on the iOS side, but nothing on the Android side. The solution I came up with was to use a <a href="https://developer.xamarin.com/guides/testcloud/uitest/working-with/backdoors/" target="_blank">backdoor</a> method to give me the information I needed.</p>https://salvoz.com/posts/2015-04-19-ssms-run-as-different-user.htmlSSMS run as different userjsalvo2015-04-19T07:31:25Z<p>Run SSMS as the user specified by &lt;domain&gt;\&lt;username&gt;:</p>
<div class="code-block">
<pre><code class="language-sql">runas /netonly /user:<domain>\<username> C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\ssms.exe
</code></pre>
</div>
<p>Run SSMS as the user specified by &lt;domain&gt;\&lt;username&gt;:</p>https://salvoz.com/posts/2015-04-12-rolling-x-month-x-week-calculations-in-mdx-using-nonempty.htmlRolling X month / X week calculations in MDX using NonEmptyjsalvo2015-04-12T08:45:20Z<p>Implementing a rolling x month or x week calculation in MDX is relatively straightforward if you have a calendar hierarchy in the multi-dimensional cube. In some cases, you may only want to include dates where a measure is non-empty. For example, if a user selects the current month, you may not want to include any future weeks that are empty in the rolling calculation.</p>
<p>Here’s some MDX that implements a rolling 6-week calculation that only includes non-empty dates.</p>
<div class="code-block">
<pre><code class="language-sql">Ancestor(Tail(NonEmpty(Descendants(StrToMember(@Calendar),[Date].[Calendar].[Date]),[Measures].[Registration Count])).Item(0),[Date].[Calendar].[Calendar Week of Year]).Lag(6):Ancestor(Tail(NonEmpty(Descendants(StrToMember(@Calendar),[Date].[Calendar].[Date]),[Measures].[Registration Count])).Item(0),[Date].[Calendar].[Calendar Week of Year])
</code></pre>
</div>
<p>This implementation uses the DESCENDANTS function to identify all child dates based on the current position in the Calendar hierarchy (represented by the parameter @Calendar). We use the TAIL function (together with ITEM(0)) to return the last date where the [Registration Count] measure is non-empty; the ANCESTOR function then returns the [Calendar Week of Year] associated with that date.</p>
<p>Once we identify the week based on the current position in the calendar hierarchy we can then use LAG(6) to navigate 6 weeks back. The Range operator ‘:’ returns all the weeks from 6 weeks prior to the current week based on the calendar hierarchy position.</p>
<p>Here’s a complete MDX query using this logic. Note: I’ve replaced StrToMember(@Calendar) with the member [Date].[Calendar].[Calendar Year].&[2015].&[Q2 2015].&[Apr 2015].</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Registration Count]} ON COLUMNS,&nbsp; NON EMPTY [Date].[Calendar Week of Year].[Calendar Week of Year].MEMBERS ON ROWS
FROM [Cube]
WHERE Ancestor(Tail(NonEmpty(DESCENDANTS([Date].[Calendar].[Calendar Year].&[2015].&[Q2 2015].&[Apr 2015],[Date].[Calendar].[Date]),[Measures].[Registration Count])).ITEM(0),[Date].[Calendar].[Calendar Week of Year]).LAG(6):Ancestor(Tail(NonEmpty(DESCENDANTS([Date].[Calendar].[Calendar Year].&[2015].&[Q2 2015].&[Apr 2015],[Date].[Calendar].[Date]),[Measures].[Registration Count])).ITEM(0),[Date].[Calendar].[Calendar Week of Year])
</code></pre>
</div>
<p>Implementing a rolling x month or x week calculation in MDX is relatively straightforward if you have a calendar hierarchy in the multi-dimensional cube. In some cases, you may only want to include dates where a measure is non-empty. For example, if a user selects the current month, you may not want to include any future weeks that are empty in the rolling calculation.</p>https://salvoz.com/posts/2015-04-12-rolling-x-months-x-weeks-calculations-in-dax.htmlRolling X months / X weeks calculations in DAXjsalvo2015-04-12T07:56:47Z<p>I recently implemented rolling 12 month and rolling 6 week measures in a SSAS tabular cube for product registration counts.</p>
<p>The rolling 12 month calculation is straightforward and can be implemented using built-in DAX formulas. This implementation is based off of Alberto Ferrari’s blog post: <a href="http://www.sqlbi.com/articles/rolling-12-months-average-in-dax/" title="http://www.sqlbi.com/articles/rolling-12-months-average-in-dax/">http://www.sqlbi.com/articles/rolling-12-months-average-in-dax/</a>.</p>
<div class="code-block">
<pre><code class="language-sql">Registration Count Rolling 12 Mos:=CALCULATE([Registration Count Current Period], DATESBETWEEN('Date'[Date], SAMEPERIODLASTYEAR(NEXTDAY(LASTDATE('Date'[Date]))),LASTDATE('Date'[Date])))
</code></pre>
</div>
<p>The LASTDATE function returns the last date contained within the active filter context of the cube. For example, if a user has filtered to July 2015, then LASTDATE will return July 31, 2015 based on a standard calendar implementation. SAMEPERIODLASTYEAR takes the day after the last date in our filtered dataset (via NEXTDAY) and identifies the same date in the prior year. DATESBETWEEN then identifies all the dates between last year’s date and this year’s last date.</p>
<p>The rolling 6 weeks implementation was less straightforward. I ended up implementing this calculation using a combination of a calculated column and measure in the cube.</p>
<p>First, I added a column called ‘ThisSaturday’ to the Date table in the cube. For each date in the Date table, it calculates Saturday’s date in the same week.</p>
<p>The calculated column formula is listed below:</p>
<div class="code-block">
<pre><code class="language-sql">=CALCULATE(MIN('Date'[Date]), FILTER(ALL('Date&#8217;),&#8217;Date'[Day of Week]=7 && 'Date'[Date] >= EARLIER('Date'[Date])))
</code></pre>
</div>
<p>We’re basically using the FILTER function to retrieve all the Saturdays that are on or after the present date, and then the MIN function identifies the Saturday that is nearest to that date.</p>
<p>Once we have this calculated column added to the date table, we can then apply it in our rolling 6 week measure.</p>
<div class="code-block">
<pre><code class="language-sql">Registration Count Rolling 6 Wks:=CALCULATE([Registration Count Current Period], DATESBETWEEN('Date'[Date], LASTDATE('Date'[ThisSaturday])-41,LASTDATE('Date'[ThisSaturday])))
</code></pre>
</div>
<p>In this measure, we use LASTDATE to identify the last date associated with the current filter context. We then subtract 41 to calculate the date 6 weeks prior (subtracting 41 days from a Saturday lands on a Sunday, giving an inclusive 42-day, or 6-week, window) and use DATESBETWEEN to retrieve all the dates between that date and the Saturday of the last date in the filter context.</p>
<p>I recently implemented rolling 12 month and rolling 6 week measures in a SSAS tabular cube for product registration counts.</p>https://salvoz.com/posts/2015-02-15-ssas-tabular-custom-prior-year-date-calculations.htmlSSAS Tabular Custom Prior Year Date Calculationsjsalvo2015-02-15T07:26:13Z<p>I am working on building a tabular cube and needed the ability to compare current fiscal year dates to prior fiscal year dates by matching the dates based on the same fiscal month and day of month and then performing calculations to compare sales between the current and prior fiscal year. The built-in DAX date calculations didn’t appear to meet my needs (at least I wasn’t able to obtain the desired results using them), so I implemented the necessary behavior using an inactive relationship from my date dimension to the sales fact table.</p>
<p>As a first step, I added two additional columns to my tabular cube date dimension that represent the prior year date key and date. An example of the query is included below (note: for simplicity in this blog post I have not included all the date dimension attributes; an actual date dimension would likely have many more attributes than what I have included). One challenge I encountered is an occasional mismatch between the number of days in a given fiscal month when comparing two fiscal years (some fiscal years have more days in a particular fiscal month than the prior year); therefore, I am using a LEFT OUTER JOIN and then ISNULL to return an unknown for the LastYearDateKey column. In addition, this column needs to have a unique value, so the unknown is set to -1 multiplied by the current row’s date key.</p>
<div class="code-block">
<pre><code class="language-sql">SELECT DateKey, [Date], ISNULL(c.LastYearDateKey,-1*DateKey) [LastYearDateKey],c.LastYearDate [LastYearDate]
FROM DimDate d
LEFT OUTER JOIN (SELECT c.DateKey CurrentYearDateKey, l.DateKey LastYearDateKey, l.Date LastYearDate
FROM DimDate l
INNER JOIN (SELECT DateKey,Date,FiscalYear - 1 [FiscalYear],FiscalMonth,FiscalDayOfMonth FROM DimDate) c ON l.FiscalYear = c.FiscalYear
AND l.FiscalMonth = c.FiscalMonth
AND l.FiscalDayOfMonth = c.FiscalDayOfMonth) c ON d.DateKey = c.CurrentYearDateKey
</code></pre>
</div>
<p>Now that I have the necessary columns added to my dimension, I created an inactive relationship from my sales fact table to the new LastYearDateKey column in the date dimension (The primary / active relationship from the date table to the sales fact table is based on the current year date key). The following shows the inactive relationship between the two tables (note that the active checkbox is unchecked).</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/02/image.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/02/image_thumb.png" width="550" height="268" /></a></p>
<p>With the inactive relationship established, I can create measures using DAX calculations that reference this inactive relationship.</p>
<p>Here’s an example of a measure in my sales table that uses the inactive relationship:</p>
<div class="code-block">
<pre><code class="language-sql">Sales Amount PY:=CALCULATE(SUM([SalesAmount]), USERELATIONSHIP(Dates[LastYearDateKey], Sales[GLDateKey]))
</code></pre>
</div>
<p>Now, I can compare sales in the prior year to the current year using the custom date matching (based on fiscal month and day of month) defined in my date dimension query.</p>
<p>I am working on building a tabular cube and needed the ability to compare current fiscal year dates to prior fiscal year dates by matching the dates based on the same fiscal month and day of month and then performing calculations to compare sales between the current and prior fiscal year. The built-in DAX date calculations didn’t appear to meet my needs (at least I wasn’t able to obtain the desired results using them), so I implemented the necessary behavior using an inactive relationship from my date dimension to the sales fact table.</p>https://salvoz.com/posts/2015-02-15-bcp-of-large-dataset-to-sql-db-on-azure-virtual-machine.htmlBCP of large dataset to SQL DB on Azure Virtual Machinejsalvo2015-02-15T05:37:04Z<p>During the past 6 months, I’ve been working on re-designing an app that uses BCP to move data from thousands of similar (but not identical) databases each storing sales and other related data for individual bicycle dealers into a unified database on an Azure Virtual Machine (currently a D13 VM with 8 core, 56 GB and SSD) that stores the data for all dealers. The overall unified database is roughly 500 GB in size and is reloaded on a weekly basis, so the movement and loading of data needed to be optimized as much as possible.</p>
<p>Originally this process was implemented via PowerShell scripts using BCP, file compression (.zip) and FTP. The data was first BCP’d from tables in the dealer’s database into a flat file format. Then the dealer’s flat files were compressed into .zip format and FTP’d. Another set of PowerShell scripts ran on the receiving server (in Azure) and pulled the files down via FTP, unzipped them and executed BCP commands to move the data from the flat file into the database. The scripts processed one table at a time for a single dealer, and once all tables completed, they would move on to the next dealer in the queue. Overall, the total dataset was taking roughly 15-16 hours to load and was unsustainable.</p>
<p>To optimize the process further, the PowerShell scripts were replaced with a C# application. The C# application is multi-threaded to load multiple tables in parallel, as well as multiple dealers; the number of dealers that can be loaded in parallel is controlled by a parameter. After performing some benchmark testing, the parameter was set to 10 parallel locations, since this appeared to provide the best performance. The C# application uses the SqlBulkCopy API, which directly bulk-copies data from tables in the dealer’s database to the appropriate tables in the unified database residing in Azure. This eliminated the flat files and the need to compress and FTP them. The C# application’s performance provided a significant improvement; the data now loads in roughly 9 hours versus 15-16 hours with the PowerShell scripts, thanks in large part to the parallel loading of tables and dealers.</p>
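<p>A simplified sketch of that loader’s core (the Dealer type and the names here are stand-ins for the real code):</p>
<div class="code-block">
<pre><code class="language-cs">using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;

class Dealer { public string ConnectionString; } // stand-in for the real dealer metadata

static class DealerLoader
{
    public static void LoadAllDealers(IEnumerable<Dealer> dealers, string[] tables, string unifiedConnStr)
    {
        // Load up to 10 dealers at once; benchmarking showed that was the sweet spot.
        Parallel.ForEach(dealers, new ParallelOptions { MaxDegreeOfParallelism = 10 }, dealer =>
        {
            foreach (var table in tables) // tables can be parallelized per dealer as well
            {
                using (var source = new SqlConnection(dealer.ConnectionString))
                using (var cmd = new SqlCommand("SELECT * FROM " + table, source))
                {
                    source.Open();
                    using (var reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(unifiedConnStr)
                           { DestinationTableName = table, BulkCopyTimeout = 0, BatchSize = 10000 })
                    {
                        bulk.WriteToServer(reader); // streams rows straight into the unified database
                    }
                }
            }
        });
    }
}
</code></pre>
</div>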
<p>In addition to re-working the code to move the data, several optimizations were also implemented involving the unified database and virtual machine in Azure. During initial performance testing while BCP’ing the data, the unified database log file was a bottleneck. The log file was originally on a persistent VHD stored on Azure blob storage. To improve log performance, the log file is temporarily moved to the D: drive (local non-persistent SSD drive) throughout the duration of the weekly load. Once the load completes, the log file is moved back to a persistent drive. We also have the database data file on a drive spread across 10 disks to help improve the IO performance and to allow for future growth. The index file is split across two disks for performance as well.</p>
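<p>For reference, repointing a log file is a standard ALTER DATABASE operation; something along these lines, with placeholder database and path names:</p>
<div class="code-block">
<pre><code class="language-sql">-- Repoint the log at the local SSD before the weekly load (names are placeholders)
ALTER DATABASE Unified MODIFY FILE (NAME = Unified_log, FILENAME = 'D:\SQLLog\Unified_log.ldf');
ALTER DATABASE Unified SET OFFLINE;
-- physically copy the .ldf file to D:\SQLLog\ here, then:
ALTER DATABASE Unified SET ONLINE;
</code></pre>
</div>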
<p>I’m sure this implementation will continue to evolve over time and I will write more blog posts as things progress.</p>
<p>During the past 6 months, I’ve been working on re-designing an app that uses BCP to move data from thousands of similar (but not identical) databases each storing sales and other related data for individual bicycle dealers into a unified database on an Azure Virtual Machine (currently a D13 VM with 8 core, 56 GB and SSD) that stores the data for all dealers. The overall unified database is roughly 500 GB in size and is reloaded on a weekly basis, so the movement and loading of data needed to be optimized as much as possible.</p>https://salvoz.com/posts/2015-01-19-using-log4net-in-ssis.htmlUsing Log4Net Nuget Package in SSISjsalvo2015-01-19T02:46:27Z<p>Log4Net is a robust logging framework that provides programmers granular control over logging to a variety of target outputs (examples include file, console, email, etc.). Detailed documentation can be found on the Apache Log4Net <a href="http://logging.apache.org/log4net/">site</a>.</p>
<p>It is possible to use Log4Net within SSIS script tasks/components, but the implementation is a bit different than when used within console or web applications. </p>
<p>1. Create or open an SSIS script task or component. Click ‘Edit Script’.</p>
<p>2. In the Solution Explorer window, right click on the solution and select ‘Manage NuGet Packages’. </p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb.png" width="244" height="141" /></a></p>
<p>If you do not see the option ‘Manage NuGet Packages’, then you will need to install the package manager extension. This can be done from within Visual Studio.</p>
<p>Go to Tools > Extension Manager.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image1.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb1.png" width="244" height="157" /></a></p>
<p>Search the online gallery for ‘NuGet Package Manager’ and install.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image2.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb2.png" width="439" height="136" /></a></p>
<p>3. In the ‘Nuget Package Manager’ window, click ‘Online’ in the left navigation pane and then search for Log4Net. Click ‘Install’ and then Close.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image3.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb3.png" width="426" height="139" /></a></p>
<p>You should now see a Packages.config file added to your project. The Packages.config file stores the version information for the NuGet package(s) installed. The following is an example:</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image4.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb4.png" width="523" height="156" /></a> </p></p>
<p>When a package is restored, NuGet references the Packages.config file to pull down the correct version of the package (in this case version 2.0.3).</p>
<p>4. In addition to adding the package via NuGet within the SSIS script task, the Log4Net.dll file will also need to be added to the GAC. There are several ways to add a DLL to the GAC; PowerShell is one option:</p>
<div class="code-block">
<pre><code class="language-powershell">Set-Location "c:\temp"
[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")
$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("c:\temp\log4net.dll")
</code></pre>
</div>
<p><em>Reference:</em> <a href="http://social.technet.microsoft.com/wiki/contents/articles/22618.how-to-install-a-dll-to-the-gac-on-windows-server-2012-using-only-powershell.aspx" title="http://social.technet.microsoft.com/wiki/contents/articles/22618.how-to-install-a-dll-to-the-gac-on-windows-server-2012-using-only-powershell.aspx">http://social.technet.microsoft.com/wiki/contents/articles/22618.how-to-install-a-dll-to-the-gac-on-windows-server-2012-using-only-powershell.aspx</a></p>
<p>5. Now we have reached the point where we can start writing some code. Generally, all Log4Net configurations are set in the App.config file or a custom Log4Net config file. </p>
<p>Since the DTEXEC.EXE application runs the SSIS packages, it is not possible to directly associate config files to the package script task/component. Therefore, all Log4Net configuration settings must be set programmatically.</p>
<p>I used the following StackOverflow post as a reference: <a href="http://stackoverflow.com/questions/2738130/using-log4net-as-a-logging-mechanism-for-ssis" title="http://stackoverflow.com/questions/2738130/using-log4net-as-a-logging-mechanism-for-ssis">http://stackoverflow.com/questions/2738130/using-log4net-as-a-logging-mechanism-for-ssis</a></p>
<p>First, add the following log4net references to the Namespaces</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image5.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb5.png" width="244" height="108" /></a></p>
<p>Now add a new member of type log4net.ILog to the Script class</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image6.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb6.png" width="550" height="193" /></a></p>
<p>Create a new method called InitLogger</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image7.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb7.png" width="550" height="354" /></a></p>
<p>Now add some logging to your main method. In this example all levels of severity are being logged to the file h:\ssis\TestLog_.log</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image8.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2015/01/image_thumb8.png" width="549" height="196" /></a></p>
<p>The line of code <code>logger.Info("Test Log Message")</code> writes a single line of output to the logging file.</p>
<p>You can adjust the severity level by calling the appropriate method. This is a code sample from <a href="http://www.beefycode.com/post/Log4Net-Tutorial-pt-1-Getting-Started.aspx">www.beefycode.com</a></p>
<div class="code-block">
<pre><code class="language-cs">logger.Debug("Hello World!");
logger.Info("I'm a simple log4net tutorial.");
logger.Warn("... better be careful ...");
logger.Error("ruh-roh: an error occurred");
logger.Fatal("OMG we're dooooooomed!");
</code></pre>
</div>
<p>There are a few other nuances to be aware of. When you close and re-open your SSIS project, you will need to restore the Log4Net NuGet package in the script task/component. To accomplish this, simply re-open the NuGet Package Manager. You will see a message stating <em>‘Some NuGet packages are missing from this solution. Click to restore from your online package sources’</em>. Click ‘Restore’ in the upper right corner. The NuGet Package Manager will restore the same version of the package referenced in the Packages.config file.</p>
<p>If you want to Update to a newer version of the package, click the ‘Updates’ option in the left navigation pane. All available updates should be displayed.</p>
<p>Log4Net is a robust logging framework that provides programmers granular control over logging to a variety of target outputs (examples include file, console, email, etc.). Detailed documentation can be found on the Apache Log4Net <a href="http://logging.apache.org/log4net/">site</a>.</p>https://salvoz.com/posts/2014-11-23-azure-vm-local-disk-performance.htmlAzure VM Local Disk Performanceasalvo2014-11-23T12:04:03Z<p>Azure Virtual Machines have always had a local non-persistent disk attached to them, with the size of this disk varying with the size of the VM. In the lead-up to the release of the D-Series VMs, the performance of the local disk on the original A-Series was, in my opinion, throttled so that the D-Series could be marketed more effectively. In the graph below, you can see my personal IOPs benchmarks for the Azure local disk. Performance steadily increased, before dropping off sharply this fall, with the D-Series showing the same performance that was once available on the A-Series.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/11/AzureVMDiskPerformance.png"><img class="alignnone size-medium wp-image-2372" alt="AzureVMDiskPerformance" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/11/AzureVMDiskPerformance-300x176.png"/></a></p>
<p>It’s unfortunate that the A-Series had to suffer this drop in performance, but it’s not too bad when you look at the pricing.</p>
<p><a href="http://azure.microsoft.com/en-us/pricing/calculator/?scenario=virtual-machines">Azure Retail Pricing for Standard VMs</a>:</p>
<p>D11 (2 cores, 14GB ram, 100GB SSD): $0.366/hr, $272.31/month</p>
<p>A5 (2 cores, 14GB ram): $0.30/hr, $223.20/month</p>
<p>D13(8 cores, 56 GB ram, 400 GB SSD): $1.318/hr, $980.60/month</p>
<p>A7 (8 cores, 56 GB ram): $1.20/hr, $892.80/month</p>
<p>In addition to the SSD on the D-Series, you also get 60% faster CPU over the A-Series.</p>
<p>Azure Virtual Machines have always had a local non-persistent disk attached to them, with the size of this disk varying with the size of the VM. In the lead-up to the release of the D-Series VMs, the performance of the local disk on the original A-Series was, in my opinion, throttled so that the D-Series could be marketed more effectively. In the graph below, you can see my personal IOPs benchmarks for the Azure local disk. Performance steadily increased, before dropping off sharply this fall, with the D-Series showing the same performance that was once available on the A-Series.</p>https://salvoz.com/posts/2014-11-06-installing-net-3-5-on-server-2012r2-azure-vms.htmlInstalling .Net 3.5 on Server 2012R2 Azure VMsasalvo2014-11-06T03:36:34Z<p>There are several blog posts, TechNet articles and Server Fault posts about installing .NET 3.5 on Windows 8 and Server 2012/R2. The gist of it is that you use the command line tool dism, with the /Source switch, to specify the path to the SXS folder from the installation media used. Here is an example of the dism command:</p>
<div class="code-block">
<pre><code>dism.exe /online /enable-feature /featurename:NetFx3 /norestart /LimitAccess /Source:"\\FilleServer\Software\WindowsUpdates\Server2012R2-Update-SXS" /All
</code></pre>
</div>
<p>I have run into a couple of issues when running this on Azure VMs created using the latest versions of the Windows 2012 R2 Datacenter gallery image, which I wanted to share.</p>
<ol>
<li>The install media changed between RTM and Update 1.</li>
</ol>
<p>When this happened, I had to download the ISO that had Update 1 slipstreamed and extract the SXS folder. You’ll notice in my example command above that the path contains \Server2012R2-Update-SXS. I have a corresponding folder that has the RTM bits.</p>
<ol start="2">
<li>Security Updates <a href="https://support.microsoft.com/kb/2966827" title="https://support.microsoft.com/kb/2966827">2966827</a> or <a href="https://support.microsoft.com/kb/2966828" title="https://support.microsoft.com/kb/2966828">2966828</a>, fixed by <a href="https://support.microsoft.com/kb/3005628" title="https://support.microsoft.com/kb/3005628">3005628</a>.</li>
</ol>
<p>The two security updates, 2966827 and 2966828, broke the ability to install .NET 3.5. Those two updates have since been fixed so that they are only installed if .NET 3.5 has already been installed. Update 3005628 can be used to fix the issue if you have already installed one of the two problem updates, as was the case with the latest Azure gallery image.</p>
<p>I ended up using the MSU package (<a href="http://go.microsoft.com/fwlink/?LinkID=517245&clcid=0x409" title="http://go.microsoft.com/fwlink/?linkid=517245&clcid=0x409">Download the Windows8.1-KB2966828-x64.msu package now)</a>, as I figured that the gallery images met the criteria: “Customers who are aware that the DISM <a href="http://technet.microsoft.com/en-us/library/hh825265.aspx" title="http://technet.microsoft.com/en-us/library/hh825265.aspx">“/StartComponentCleanup” and “/ResetBase”</a> option was used on their system…”</p>
<p>There are several blog posts, technet articles and Server Fault posts about installing .net 3.5 on Windows 8 and Server 2012/R2. The gist of it, is that you use the command line tool dism, with the /Source switch to specify the path to the SXS folder from the installation media used. Here is an example of the dism command:</p>https://salvoz.com/posts/2014-05-23-connect-to-sharepoint-online-list-via-csom-in-ssis.htmlConnect to SharePoint Online List via CSOM in SSISjsalvo2014-05-23T03:22:10Z<p>Retrieving data from a list in SharePoint Online requires a slightly different implementation than SharePoint on-prem. The following example uses C# and CSOM to retrieve data from a SharePoint Online list in an SSIS dataflow script component. </p>
<p>1. First you’ll need to add two references to the SSIS Script Component Visual Studio project. To add a reference right click references and select ‘Add Reference’.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/05/image.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/05/image_thumb.png" /></a></p>
<p>Click the .NET tab and then add the following two reference to your project:</p>
<ul>
<li>Microsoft.SharePoint.Client</li>
<li>Microsoft.SharePoint.Client.Runtime</li>
</ul>
<p>2. Update the Namespaces region so it is similar to the following:</p>
<div class="code-block">
<pre><code class="language-cs">#region Namespaces
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Client;
using System.Security;
#endregion
</code></pre>
</div>
<p><strong>Note:</strong> most of the namespaces are already included by default, the bottom three were added for this implementation.</p>
<p>3. Add the following variables to the class definition:</p>
<div class="code-block">
<pre><code class="language-cs"> private SecureString secureStr;
private ClientContext cc;
private List spList;
private String query = "<View><Query><OrderBy><FieldRef Name=&#8217;ID&#8217;/></OrderBy></Query></View>";
</code></pre>
</div>
<p>The query variable stores the CAML query that will be used to retrieve all fields for all items in the list, sorted by ID. More information regarding CAML can be found <a href="http://sharepoint-works.blogspot.com/2012/05/caml-query-tutorial-for-sharepoint.html">here</a>.</p>
<p>4. When connecting to a list in SharePoint Online, you need to use the SharePointOnlineCredentials class to set the user and password credentials. The ClientContext class is used to establish a connection to SharePoint. The following code is from the method called CreateNewOutputRows(); a sketch of its core logic follows the notes below.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/05/image1.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;border-left: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/05/image_thumb1.png" /></a></p>
<p><strong>Some helpful notes:</strong></p>
<ul>
<li>Variable.SharepointURL is a URL to the SharePoint site collection where the list resides. Something similar to https://&lt;SharePoint Instance&gt;/sites/&lt;site name&gt;.</li>
<li>The name of the SharePoint list being referenced is “Benchmark Category”.</li>
<li>The SecureString class must be used for SharePoint Online Credentials. cc.Credentials = new SharePointOnlineCredentials(User.SharePointUser, secureStr).</li>
<li>When iterating through the items in the list, I am referencing each item based on the key associated to the list field. In the SharePoint list we have a field with the name ‘Product Category’, but the key associated to that field is ‘Title’; therefore, I am referencing this field using item[“Title”]. We also have a field that is displayed as ‘Benchmark Category’ and the key for this field is ‘Benchmark_x0020_Category’.</li>
<li>The ‘Benchmark Category’ field is defined as type ‘Choice’ in SharePoint (users can select from a list of items in a drop-down); therefore, I needed to cast item[“Benchmark_x0020_Category”] to a String array and then retrieve the item at the first index (0).</li>
</ul>
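<p>Putting those notes together, the core of CreateNewOutputRows() looks roughly like the following sketch (the password variable and the output buffer column names are illustrative):</p>
<div class="code-block">
<pre><code class="language-cs">public override void CreateNewOutputRows()
{
    cc = new ClientContext(Variable.SharepointURL);

    // SharePoint Online requires a SecureString password
    // (User.SharePointPassword is a stand-in for however you store it)
    secureStr = new SecureString();
    foreach (char c in User.SharePointPassword)
        secureStr.AppendChar(c);
    cc.Credentials = new SharePointOnlineCredentials(User.SharePointUser, secureStr);

    spList = cc.Web.Lists.GetByTitle("Benchmark Category");
    ListItemCollection items = spList.GetItems(new CamlQuery { ViewXml = query });
    cc.Load(items);
    cc.ExecuteQuery();

    foreach (ListItem item in items)
    {
        Output0Buffer.AddRow();
        Output0Buffer.ProductCategory   = (string)item["Title"];              // 'Product Category' field key
        Output0Buffer.BenchmarkCategory = ((string[])item["Benchmark_x0020_Category"])[0]; // Choice field
    }
}
</code></pre>
</div>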
<p>Retrieving data from a list in SharePoint Online requires a slightly different implementation than SharePoint on-prem. The following example uses C# and CSOM to retrieve data from a SharePoint Online list in an SSIS dataflow script component. </p>https://salvoz.com/posts/2014-05-13-create-display-sub-folder-for-tabular-ssas-measures.htmlCreate Display Sub Folder for Tabular SSAS Measuresjsalvo2014-05-13T09:26:46Z<p>If you have a SSAS tabular cube with a large number of measures in a particular measure group, you may want to group specific measures into display sub-folders. Display sub-folders help keep a cube more organized and reduce the amount of scrolling end-users need to do to locate the measures they want to add to a pivot table in Excel. As far as I am aware, there is no option to create a sub-folder from the SQL Server Data Tools GUI (without installing a 3rd party tool such as BIDS Helper). Fortunately, you can create display sub-folders without the need to install a 3rd party tool by following the steps listed below. Thanks to <a href="http://blog.opensourcesql.com/2013/08/create-display-sub-folders-for-tabular.html" title="http://blog.opensourcesql.com/2013/08/create-display-sub-folders-for-tabular.html">OpenSourceSQL.com</a> for the original blog post on this topic.</p>
<ol>
<li>Right click the Model.bim file for your solution and choose View Code. You’ll be presented with the XML behind the model.</li>
<li>Find the &lt;CalculationProperty&gt; section related to the measure you want to add to a sub-folder. The actual tag where the measure is named is &lt;CalculationReference&gt;.</li>
<li>The XML tag that adds a sub-folder is &lt;DisplayFolder&gt;…&lt;/DisplayFolder&gt;. Add this inside the &lt;CalculationProperty&gt; tag and enter the desired folder name (see the sketch after this list).<a href="http://2.bp.blogspot.com/-tEuKnTdF1K4/Ug4_eGJ0_EI/AAAAAAAAADs/A_GaDBMK348/s1600/xmltag.JPG"><img alt="" src="http://2.bp.blogspot.com/-tEuKnTdF1K4/Ug4_eGJ0_EI/AAAAAAAAADs/A_GaDBMK348/s400/xmltag.JPG" /></a></li>
<li>When you connect to the cube in Excel and add a pivot table, the measure should now be displayed in the appropriate sub folder.<a href="http://4.bp.blogspot.com/-K_JMaDRlW4U/Ug4_9fn6e8I/AAAAAAAAAD0/Z4GMRZGeWAc/s1600/xxCapture.JPG"><img alt="" src="http://4.bp.blogspot.com/-K_JMaDRlW4U/Ug4_9fn6e8I/AAAAAAAAAD0/Z4GMRZGeWAc/s320/xxCapture.JPG"/></a></li>
</ol>
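<p>The edited XML ends up looking something like this (the measure and folder names are examples):</p>
<div class="code-block">
<pre><code class="language-xml"><CalculationProperty>
  <CalculationReference>Measures.[Sales Amount]</CalculationReference>
  <CalculationType>Member</CalculationType>
  <DisplayFolder>Sales Measures</DisplayFolder>
</CalculationProperty>
</code></pre>
</div>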
<p>If you have a SSAS tabular cube with a large number of measures in a particular measure group, you may want to group specific measures into display sub-folders. Display sub-folders help keep a cube more organized and reduce the amount of scrolling end-users need to do to locate the measures they want to add to a pivot table in Excel. As far as I am aware, there is no option to create a sub-folder from the SQL Server Data Tools GUI (without installing a 3rd party tool such as BIDS Helper). Fortunately, you can create display sub-folders without the need to install a 3rd party tool by following the steps listed below. Thanks to <a href="http://blog.opensourcesql.com/2013/08/create-display-sub-folders-for-tabular.html" title="http://blog.opensourcesql.com/2013/08/create-display-sub-folders-for-tabular.html">OpenSourceSQL.com</a> for the original blog post on this topic.</p>https://salvoz.com/posts/2014-01-26-garmin-vector.htmlGarmin Vectorasalvo2014-01-26T04:48:47Z<p>I bought my first power meter, a <a href="http://www.powertap.com/">Power Tap</a> at the start of the 2010 Tri season, which also happened to be the year I trained for my first Iron Man. Four years later and I’m getting ready to start training for my second Iron Man and figured why not upgrade <img class="wlEmoticon wlEmoticon-smile" style="border-top-style: none;border-left-style: none;border-bottom-style: none;border-right-style: none" alt="Smile" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/wlEmoticon-smile.png" /> Last week I got my <a href="http://sites.garmin.com/vector/">Garmin Vector</a> and wanted to give some first impressions and thoughts. For a complete in-depth review head over to <a href="http://www.dcrainmaker.com/2013/09/garmin-vector-review.html">DCRainmaker.com</a>.</p>
<p>I’ve been following the development of the Vector since before Garmin bought the original company, MetriGear. A power meter that measures power at the pedals (and is built into the pedals) has several advantages over other systems that measure power, and are built into other bike components. The main types of power meters are:</p>
<ol>
<li>Wheel Hub (i.e. Power Tap)</li>
<li>Crank (i.e. <a href="http://www.srm.de">Srm</a>, <a href="http://www.quarq.com/">Quarq</a>, <a href="http://www.stagescycling.com/">Stages</a>)</li>
<li>Pedal (i.e. <a href="http://www.lookcycle.com/en/us/route/pedales/keo-power.html">Look</a>, Garmin Vector, at least one other I can’t remember)</li>
</ol>
<p>The advantages of the Vector system (at least in my opinion)</p>
<ol>
<li>Uses ANT+ (Look’s system has been out for a while, but is not ANT+ compatible)</li>
<li>Can use any rear wheel</li>
<li>Can use different crank-arm lengths on my Speed Concept and Madone</li>
<li>Can move relatively easily between bikes</li>
<li>Uses Look Keo Cleats (I’ve been using Look Keo pedals for 5 years)</li>
</ol>
<p>Disadvantages of Vector</p>
<ol>
<li>Price compared to some of the other systems (i.e. PowerTap, Stages)</li>
<li>It’s new (no history of durability, reliability, etc)</li>
<li>Doesn’t support Bluetooth LE (although not many power meters do at this point)</li>
<li>Bike/Crank Arm Compatibility (<a href="https://www.youtube.com/watch?v=zgs0mxvSZlQ&list=PLZQteomUj15PeX7QXCRqizTTCh3z2YZVb&index=2">Watch this video</a> to see how to determine if the Vector will work with your bike and crank arms).</li>
</ol>
<p><strong>Installation</strong></p>
<p>The installation was not as hard as I thought it was going to be. Garmin provides an excellent set of videos on their website (<a href="http://sites.garmin.com/vectorowner/" title="http://sites.garmin.com/vectorowner/">http://sites.garmin.com/vectorowner/</a>), and DC Rainmaker also does a good job explaining the install process in his in-depth review.</p>
<p>I spent around 30 minutes installing the pedals for the first time. This included hooking them up to the computer to verify they had the latest firmware (2.10), removing the old pedals, carefully re-reading the instructions, using the torque wrench to tighten pedals for the first time, etc. I also had to update the firmware on my Edge 800 for some Vector specific functionality. One of the more important footnotes in the installation process is the requirement to tighten the pedals to 25 foot lbs. Again, the Garmin web site and DC Rainmaker have lots of details on this, including recommendations on the tools you will want to use.</p>
<p>One advantage to hooking your new pedals up to your computer is that you can see the device ID. This is useful if you have two power meters and you are trying to pair one power meter with one head unit, and the other power meter to a second head unit. Knowing the device IDs of the power meters allows you to just type the ID in (at least on an Edge 800). You can also set your crank-arm length while connected to your computer.</p>
<p>Once the pedals are installed, you should pair them to your head unit, in my case, it’s a Garmin Edge 800. After that, you can set your crank-arm length and set the installation angle (just follow the on screen prompts and pedal ~90 RPM). </p>
<p>I would highly recommend that you watch the videos to understand what is involved in the installation before you make your purchase. What I have written here is a very brief summary. I think one of the primary selling points of the Vector is being able to move it between bikes <u>easily</u>, and I think the Vector will live up to this. DC Rainmaker has a video showing that it takes him around five minutes to move the pedals between bikes.</p>
<p><strong>Usage / Analytics</strong></p>
<p>Once installed, using the Vectors is pretty much the same as using any other type of power meter. Before starting your ride, you should always do a static calibration. This involves having the bike upright, crank arms at the 3 and 9 o’clock position, being unclipped from the pedals and selecting calibrate from your head unit. After starting your ride, you should do a second calibration, which involves coasting for a bit as you pedal backwards smoothly for 5 consecutive revolutions. A message will appear on the screen (at least on Edge units) indicating calibration was successful.</p>
<p>During your ride, you get all the metrics provided by other power meters, which can be displayed on your head unit. On my Edge 800, the only new metric I get with the Vector is Left/Right balance (and averages over 30 seconds, lap and ride). It sounds like you may get a few more with an Edge 810 (Pedal Smoothness and Torque Effectiveness).</p>
<p>After your ride, you simply upload your data to your favorite training website or analysis tool. Garmin Connect, which is free, shows all the common power based metrics, as well as some Left/Right data. Here are a few screen shots, which mainly show that the Vector works like any other power meter, but with the addition of Left/Right data.</p>
<p>Power Summary Data<br />
<a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image4.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;padding-top: 0px;padding-left: 0px;margin: 0px;border-left: 0px;padding-right: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb4.png" /></a></p>
<p>Power Graph<br />
<a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image5.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;padding-top: 0px;padding-left: 0px;margin: 0px;border-left: 0px;padding-right: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb5.png" /></a></p>
<p>Right/Left Balance<br />
<a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image6.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;padding-top: 0px;padding-left: 0px;margin: 0px;border-left: 0px;padding-right: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb6.png" /></a></p>
<p>I also did a comparison between the Vector and Power Tap using a method similar to DC RainMaker, which involves the use of a <a href="http://www.npe-inc.com/products/products-wasp.html">Wasp</a> unit. The Wasp captures Ant+ data from multiple devices, including multiple power meters at the same time (something normal head units can’t do) and can output the data to a CSV file. I applied 30 second averaging and came up with two graphs, one for each interval I did yesterday which shows the power from each device, as well as the differences between.</p>
<p>I did a static and dynamic calibration before starting the recording for each interval. I expected that the Vector would show higher power numbers, since the Power Tap is measuring at the wheel and there is a loss of power due to the drive train. I’m not sure what happened at the 723 second mark during the first interval. I know at the 2410 second mark on the second interval, I did a second dynamic calibration just to see what would happen. I’ll have to keep a better log of what happens during my rides in order to determine what may be causing these interesting spikes/dips/changes.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image7.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;padding-top: 0px;padding-left: 0px;margin: 0px;border-left: 0px;padding-right: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb7.png"/></a></p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image8.png"><img title="image" style="border-top: 0px;border-right: 0px;border-bottom: 0px;padding-top: 0px;padding-left: 0px;margin: 0px;border-left: 0px;padding-right: 0px" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb8.png" /></a></p>
<p><strong>Final Thoughts</strong></p>
<p>So far, after two rides and the initial installation, I can say I’m happy with my Garmin Vector pedals. I’ll do some more comparison rides between my Power Tap and Vector as I start to get serious about training once more. Once spring hits, I’ll probably sell off my Power Tap as I don’t need two power meters. My wife has a Power Tap that I can use as a backup just in case, or if I want to do some more comparisons.</p>
<p>Is it worth the premium price? If you see the same advantages in the unit as I do, then I think it's worth it. The requirement (or desire) to have different crank arm lengths rules out the crank-based power meters for me. If that's not a requirement, and you are looking to be a little more frugal, you should take a look at the Stages power meter.</p>
<p>The Stages power meter is getting a lot of press lately because it was recently announced that Team Sky will be using Stages instead of SRM this year. Why is that noteworthy? Because unlike the SRM system they used last year (and other crank-based systems like Quarq), the Stages system only measures power from one crank arm (it's installed on only one crank arm) and then doubles it. So in certain situations, the power shown will not be as accurate. However, quite a few people have pointed out that if it's good enough for a pro team like Sky, it's good enough for them.</p>
https://salvoz.com/posts/2014-01-07-sharepoint-2010-excel-services-and-ssis.htmlSharePoint 2010 Excel Services and SSISjsalvo2014-01-07T08:18:35Z<p>As increasing amounts of data are stored in SharePoint in various formats such as lists, Excel documents and Word documents, gaining access to this data is an on-going challenge for Business Intelligence developers. Fortunately, SharePoint 2010 provides some nice features that make accessing this data a bit less onerous.</p>
<p>For example, Excel Services in SharePoint 2010 provides two new programmatic models to help developers gain access to data in workbooks. These models are the ECMAScript (JScript, JavaScript) object model and the Excel Services REST API. </p>
<p>This blog post will focus specifically on the Excel Services REST API and how this API may be used in conjunction with an SSIS script component to retrieve data from a SharePoint-based Excel spreadsheet in a data flow.</p>
<p>As a first step, you will need to ensure that the data in your spreadsheet is structured as an Excel table. This isn’t necessary to use the Excel Services REST API in general, but it is required for the example I will demonstrate in this blog post. </p>
<p>To structure your data as an Excel Table, simply highlight the range of data in your Excel sheet and click ‘Table’ (under the INSERT menu). This will add a Table called Table1.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image.png"><img style="border-bottom: 0px;border-left: 0px;border-top: 0px;border-right: 0px" title="image" border="0" alt="image" src=" https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb.png" /></a></p>
<p>Once you have the data in an Excel Table format, you will need to identify the appropriate URL to leverage the REST API features. The URL will be similar to the following:</p>
<p><code>https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/model?$format=atom</code></p>
<p>Let’s break the URL down into smaller components:</p>
<table border="0" cellspacing="0" cellpadding="2" width="506">
<tr>
<td valign="top" width="200">
<strong>URL Component</strong>
</td>
<td valign="top" width="304">
<strong>Description</strong>
</td>
</tr>
<tr>
<td valign="top" width="200">
<code>https://intranet.mysite.com/sites/mysite</code>
</td>
<td valign="top" width="304">
The path to the site or sub-site where the Excel workbook resides.
</td>
</tr>
<tr>
<td valign="top" width="200">
<code>/_vti_bin/ExcelRest.aspx</code>
</td>
<td valign="top" width="304">
The relative path to the web service.
</td>
</tr>
<tr>
<td valign="top" width="200">
<code>/Shared%20Documents/book1.xlsx</code>
</td>
<td valign="top" width="304">
The relative path to the Excel workbook.
</td>
</tr>
<tr>
<td valign="top" width="200">
<code>/model?$format=atom</code>
</td>
<td valign="top" width="304">
The command or commands for the Web service. In this case, you are requesting a “model” of the workbook in an ATOM feed.
</td>
</tr>
</table>
<p>Since we are interested in the data contained in Table1, we will modify the URL slightly. </p>
<p><code>https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/Model/Tables%28%27Table1%27%29?%24format=atom</code></p>
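<p>Tables are not the only addressable part of the workbook. The REST API documentation also describes addressing ranges directly; a hypothetical range request (the pipe character, encoded as %7C, separates the two corner cells of the range) would look similar to:</p>
<p><code>https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/Model/Ranges%28%27Sheet1!A1%7CC10%27%29?%24format=atom</code></p>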
<p>We have now reached the point where we can add a Script Component to our data flow and start writing some code to access the REST API. I am going to assume you already have an understanding of Script Components in SSIS, so I won’t go into too much detail here other than the following points:</p>
<p>1. When you add the script component to your data flow, be sure to select the option labeled ‘Source’, since we are using the script to pull data from SharePoint.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image1.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb1.png" /></a></p>
<p>2. Make sure you add the columns you intend to pull from the Excel spreadsheet as Output Columns in your Script Component and set their data types accordingly. An example is shown below:</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image2.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb2.png" /></a></p>
<p>Now, we will edit the script and add some code (I am using the C# language in this example).</p>
<p>First, you’ll need to add a new Service Reference to the Script Component. To accomplish this, right click on ‘Service References’ and select ‘Add Service Reference’.</p>
<p>You’ll need to populate the Address field with the correct service URL, which should be similar to:
<a href="https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelService.asmx">https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelService.asmx</a></p>
<p>You’ll also want to update the Namespace field with a logical name; in my example I called the namespace ‘ExcelService’. When finished, click ‘OK’.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image3.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2014/01/image_thumb3.png" /></a></p>
<p>You’ll also need to add the following references and namespaces to your script, if they aren’t already included.</p>
<div class="code-block">
<pre><code class="language-cs">#region Namespaces
using System;
using System.Data;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml.Linq;
using System.IO;
using System.Xml;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
#endregion
</code></pre>
</div>
<p>Two namespaces are used in the XML you will be querying: the ATOM feed namespace and the Excel Services REST namespace. We need to include strings in our class that specify these namespaces. </p>
<p>Within the class ScriptMain, add the following two lines of code:</p>
<div class="code-block">
<pre><code class="language-cs">
const string atomNameSpace = "http://www.w3.org/2005/Atom";
const string xlsvcNameSpace = "http://schemas.microsoft.com/office/2008/07/excelservices/rest";
</code></pre>
</div>
<p>We’ll now add a method called LoadRanges to our ScriptMain class. In the LoadRanges method we will set credentials to access the API, build a URI, and return an XDocument via the <a href="http://msdn.microsoft.com/en-us/library/bb343181(v=vs.110).aspx">XDocument.Load</a> method that stores the data from the spreadsheet. Sample code is shown below:</p>
<p><strong>Note:</strong> Variables.SharePointURI is referencing a variable in my package that stores a URL path similar to the one shown earlier in this blog post (<a href="https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/Model/Tables%28%27Table1%27%29?%24format=atom">https://intranet.mysite.com/sites/mysite/_vti_bin/ExcelRest.aspx/Shared%20Documents/book1.xlsx/Model/Tables%28%27Table1%27%29?%24format=atom</a>)</p>
<div class="code-block">
<pre><code class="language-cs">private XDocument LoadRanges()
{
// Use XMLUrlResolver to pass credentials to the Web service.
XmlUrlResolver resolver = new XmlUrlResolver();
resolver.Credentials = System.Net.CredentialCache.DefaultCredentials;
// Build the URI to pass the resolver.
Uri fullUri = new Uri(Variables.SharePointURI);
return XDocument.Load((Stream)resolver.GetEntity(fullUri, null, typeof(Stream)));
}
</code></pre>
</div>
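<p>Before wiring up the output, it helps to know roughly what the rows in the returned ATOM feed look like. The sketch below is abbreviated and hypothetical (element details vary, so inspect your own feed in a browser to confirm the exact shape); the key point is that each row contains cell values (v) and, for formatted cells, formatted values (fv) in the Excel Services REST namespace, which is why the next method collects both element names.</p>
<div class="code-block">
<pre><code class="language-xml">&lt;!-- Abbreviated, hypothetical row from the feed; x = the excelservices/rest namespace --&gt;
&lt;x:row&gt;
  &lt;x:c&gt;&lt;x:v&gt;1&lt;/x:v&gt;&lt;/x:c&gt;
  &lt;x:c&gt;&lt;x:v&gt;Bikes&lt;/x:v&gt;&lt;/x:c&gt;
  &lt;x:c&gt;&lt;x:v&gt;0.92&lt;/x:v&gt;&lt;x:fv&gt;92%&lt;/x:fv&gt;&lt;/x:c&gt;
&lt;/x:row&gt;
</code></pre>
</div>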
<p>We now need to update the CreateNewOutputRows() method. You won’t be able to simply copy the code below because this code is specific to the spreadsheet structure (number of columns, ordering of columns, etc.). You will likely need to spend some time debugging to determine what logic works for your specific Excel workbook.</p>
<div class="code-block">
<pre><code class="language-cs">public override void CreateNewOutputRows()
{
XNamespace a = atomNameSpace;
XNamespace x = xlsvcNameSpace;
var doc = LoadRanges();
// Query the XDocument for all the row elements, skip first row with headers
var rows = doc.Descendants(x + "row").Skip(1).ToList();
foreach (var r in rows)
{
var allValues = r.Descendants().Where(n => n.Name == (x + "v") || n.Name == (x + "fv")).ToList();
Output0Buffer.AddRow();
Output0Buffer.Rank = Convert.ToInt32(allValues[0].Value);
Output0Buffer.Category = allValues[2].Value;
Output0Buffer.Score = Convert.ToDecimal(allValues[11].Value);
Output0Buffer.DRI = allValues[13].Value;
}
}
</code></pre>
</div>
<p>Hopefully, this example code helps you get started with Excel Services. If you need a more detailed overview, I suggest you check out this <a href="http://msdn.microsoft.com/en-us/library/ff640648%28v=office.14%29.aspx#restapi">documentation</a>.</p>
https://salvoz.com/posts/2014-01-05-mdx-root-function.htmlMDX Root Functionjsalvo2014-01-05T09:26:50Z<p>This week I discovered an incredibly useful MDX function called Root. Full documentation can be found <a href="http://technet.microsoft.com/en-us/library/ms146076.aspx">here</a>.</p>
<p>In the past, I have encountered scenarios where I want to override filtering and use the All member for specific attribute hierarchies. I have usually accomplished this by hard-coding the All member in an MDX expression, something similar to:</p>
<p><code>([Measures].[All Sales], [Customer].[Market].[All])</code></p>
<p>But this approach becomes unwieldy if you need to override a large number of attribute hierarchies (or possibly even all attribute hierarchies in a dimension or cube). It also can be problematic if the cube is updated and a new attribute hierarchy is added, in which case you may need to update your expression to account for the new attribute hierarchy. The Root function helps avoid these pitfalls.</p>
<p>The Root function returns the <strong>All</strong> member for each attribute hierarchy within the current scope in a cube, dimension or tuple (If an attribute hierarchy does not have an All member, then the default member is used). So, it eliminates hard-coding the All member for multiple attribute hierarchies in an MDX expression.</p>
<p>There are three syntaxes for the Root function:</p>
<p><strong>Cube syntax: Root().</strong> The Root function in this case will return the All member (or default member if the All member does not exist) from each attribute hierarchy in the cube.</p>
<p><strong>Dimension syntax: Root(Dimension_Name).</strong> In this scenario, the Root function will return the All member (or default member if the All member does not exist) for each attribute hierarchy in the specified dimension.</p>
<p>For example, if you want to override all filters applied to any attribute hierarchy in the Customer dimension the syntax is:</p>
<p><code>Root([Customer])</code></p>
<p><strong>Tuple syntax: Root(Tuple_Expression).</strong> When a tuple expression is specified, the Root function will return a tuple that contains the intersection of the specified tuple and the All members of the other dimension attributes not included in the specified tuple.</p>
<p>For example, if you wanted to override all filters on a specific dimension except those applied to two attribute hierarchies (in this example, Market and Dealer Level) the syntax would be similar to:</p>
<p><code>Root(([Customer].[Market].CurrentMember, [Customer].[Dealer Level].CurrentMember))</code></p>
<p>You can also specify a specific member (instead of the CurrentMember) as shown below:</p>
<p><code>Root([Date].[Jan 1, 2013])</code></p>
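<p>To tie these syntaxes together, here is a hypothetical share-of-total calculation (I am assuming a cube named [MyCube] with an [All Sales] measure and a [Customer] dimension, as in the snippets above). The tuple built with Root([Customer]) ignores any filters on Customer attribute hierarchies without hard-coding a single All member:</p>
<div class="code-block">
<pre><code class="language-sql">WITH MEMBER [Measures].[Pct of All Customers] AS
    [Measures].[All Sales] / (Root([Customer]), [Measures].[All Sales]),
    FORMAT_STRING = 'Percent'
SELECT {[Measures].[All Sales], [Measures].[Pct of All Customers]} ON 0,
       NON EMPTY {[Customer].[Market].MEMBERS} ON 1
FROM [MyCube]
</code></pre>
</div>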
https://salvoz.com/posts/2013-12-24-mdx-implementing-logical-and-on-members-of-the-same-hierarchy.htmlMDX Implementing Logical AND On Members of the Same Hierarchyjsalvo2013-12-24T04:12:19Z<p><strong>Reference:</strong> MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook by Tomislav Piasevoli</p>
<p>In my <a href="2013-12-14-mdx-implementing-logical-or-on-members-of-different-hierarchies.html">prior blog post</a>, I discussed implementing logical OR in MDX on members of different hierarchies. As a follow-up to that post, I will now demonstrate how to implement logical AND on members of the same hierarchy. If you have not read my prior post, I recommend doing so before proceeding.</p>
<p>In MDX, when two or more members of the same hierarchy are placed on a slicer (forming a set), OR logic is automatically applied. For example, if you have two members (e.g. sizes ‘Small’ and ‘Large’) on a slicer, the resulting dataset will contain data where the first, the second, or both members occurred in the associated fact table. In contrast, a tuple formed by members of multiple hierarchies (e.g. size ‘Small’ and color ‘Red’) will only return results that have data associated with both members; AND logic is applied in this instance.</p>
<p>To summarize, in MDX</p>
<ul>
<li>Sets imply logical OR</li>
<li>Tuples imply logical AND</li>
</ul>
<p>What if you need to apply AND logic to members of the same hierarchy?</p>
<p>For example, what if we want to only see data for months where we sold products to customers in both Mexico and Brazil?</p>
<p>Let’s run an MDX query for Brazil only. </p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Unique Purchasers]} ON 0, NON EMPTY {[GL Date].[Fiscal Month Of Year].MEMBERS} ON 1
FROM [MyCube]
WHERE {[Customers].[Country].&[Brazil]}
</code></pre>
</div>
<p>As we can see from the output, we started selling product to customers in Brazil in Aug 2013.</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image_thumb.png" /></a></p>
<p>Now let’s run a similar MDX query for Mexico. The output is shown below:</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image1.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image_thumb1.png" /></a></p>
<p>In this case, we have data back to 2011.</p>
<p>If we restrict this output to only include Aug 2013 onward, we see that we have sold product to 54 unique customers in Mexico.</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Unique Purchasers]} ON 0
FROM [MyCube]
WHERE ({[Customers].[Country].&[Mexico]},{[GL Date].[Fiscal Month Of Year].&[Aug 2013]:NULL})
</code></pre>
</div>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image2.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image_thumb2.png" /></a></p>
<p>To only include data for months when we sold product to customers in both Mexico and Brazil, we can use the following MDX query:</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Unique Purchasers]} ON 0, NON EMPTY {[Customers].[Country].MEMBERS} ON 1
FROM (
SELECT {[Customers].[Country].&[Brazil],[Customers].[Country].&[Mexico]} ON 0
FROM [MyCube]
)
WHERE
(
EXISTS(
EXISTS( {[GL Date].[Fiscal Month Of Year].[Fiscal Month Of Year].MEMBERS},
{[Customers].[Country].&[Brazil]}, "Sales"
),
{[Customers].[Country].&[Mexico]}, "Sales"
)
)
</code></pre>
</div>
<p>The output of this query is:</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image3.png"><img title="image" border="0" alt="image" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/image_thumb3.png" /></a></p>
<p>As you can see, the output for both Brazil and Mexico is the aggregate of the months from Aug 2013 onward, since we are only including months where we had sales in both countries.</p>
<p>A few noteworthy details:</p>
<p>In this query, we have a sub-select that includes the two members [Customers].[Country].&[Brazil] and [Customers].[Country].&[Mexico].</p>
<p>In the WHERE clause we are using two nested EXISTS functions. The first argument of the inner EXISTS is the ‘Fiscal Month of Year’ level (not the hierarchy); it is important to note that you need to use the level and not the hierarchy for this query to work properly. We also need to specify the measure group of interest, in this case "Sales". The outer EXISTS function uses the output of the inner EXISTS as its first argument.</p>
https://salvoz.com/posts/2013-12-14-mdx-implementing-logical-or-on-members-of-different-hierarchies.htmlMDX Implementing Logical OR On Members of Different Hierarchiesjsalvo2013-12-14T14:27:03Z<p><strong>Reference:</strong> MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook by Tomislav Piasevoli</p>
<p>In MDX, when two or more members of the same hierarchy are placed on a slicer (forming a set), OR logic is automatically applied. For example, if you have two members (e.g. sizes ‘Small’ and ‘Large’) on a slicer, the resulting dataset will contain data where the first, the second, or both members occurred in the associated fact table. In contrast, a tuple formed by members of multiple hierarchies (e.g. size ‘Small’ and color ‘Red’) will only return results that have data associated with both members; AND logic is applied in this instance.</p>
<p>To summarize, in MDX</p>
<ul>
<li>Sets imply logical OR</li>
<li>Tuples imply logical AND</li>
</ul>
<p>What if you need to apply OR logic to members of different hierarchies (e.g. tuples)?</p>
<p>In this example, we will combine the results of the following two MDX queries into a single query by implementing OR logic between members of the two hierarchies.</p>
<p>The first MDX query returns the quantity shipped for orders handled by Inside Sales Reps, broken down by fiscal month of year:</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Quantity Shipped]} ON COLUMNS, NON EMPTY { [GL Date].[Fiscal Month Of Year].MEMBERS } ON ROWS
FROM [MyCube]
WHERE [Sale Source].[ISR].&[Y]
</code></pre>
</div>
<p>The second MDX query returns the quantity shipped for online orders, again broken down by fiscal month of year:</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Quantity Shipped]} ON COLUMNS, NON EMPTY { [GL Date].[Fiscal Month Of Year].MEMBERS } ON ROWS
FROM [MyCube]
WHERE [Customers].[Market].&[Online]
</code></pre>
</div>
<p>To combine the result of the above queries in a single MDX query, we must first take the two members ([Sale Source].[ISR].&[Y] and [Customer].[Market].&[Online]) and convert them into tuples of the same dimensionality. To accomplish this, we combine each member with the ‘All’ or root member of the other dimension as shown below (in this case, we have two tuples):</p>
<div class="code-block">
<pre><code class="language-sql">([Sale Source].[ISR].&[Y], [Customers].[Market].[All])
([Sale Source].[ISR].[All],[Customers].[Market].&[Online])
</code></pre>
</div>
<p>We then combine these tuples into a set (the tuples are separated by a comma and surrounded by curly brackets that define the set). The set is located in the WHERE clause of our new query:</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Quantity Shipped]} ON COLUMNS, NON EMPTY { [GL Date].[Fiscal Month Of Year].MEMBERS } ON ROWS
FROM [MyCube]
WHERE { ([Sale Source].[ISR].&[Y], [Customers].[Market].[All]), ([Sale Source].[ISR].[All],[Customers].[Market].&[Online])}
</code></pre>
</div>
<p>In this case, the [Quantity Shipped] value from our combined query is the sum of the [Quantity Shipped] values from the original two MDX queries, since there is no overlap between the two queries’ resulting outputs.</p>
https://salvoz.com/posts/2013-12-07-mdx-order-by-multiple-criteria.htmlMDX Order by Multiple Criteriajsalvo2013-12-07T14:14:55Z<p>The MDX language includes an ORDER function that orders members of a set in ascending or descending order based on a numeric (or string) expression. The syntax of the Order function is displayed below:</p>
<p>Numeric expression syntax:</p>
<div class="code-block">
<pre><code class="language-sql">
Order(Set_Expression, Numeric_Expression [ , { ASC | DESC | BASC | BDESC } ] )
</code></pre>
</div>
<p>String expression syntax:</p>
<div class="code-block">
<pre><code class="language-sql">
Order(Set_Expression, String_Expression [ , { ASC | DESC | BASC | BDESC } ] )
</code></pre>
</div>
<p>The implementation is straightforward if we want to sort by a single criterion. But what if we need to sort by multiple criteria? For example, we may first want to sort by Year in descending order and then sort product categories in descending order based on sales. To accomplish this, we need to use the ORDER function multiple times in a single query.</p>
<p>An example is shown below:</p>
<div class="code-block">
<pre><code class="language-sql">SELECT {[Measures].[Sales]} ON COLUMNS,
ORDER (
ORDER ([Dates].[Fiscal Calendar].[Year].MEMBERS*[Items].[Product Category].MEMBERS, [Measures].[Sales], BDESC),
[Dates].[Fiscal Calendar].CurrentMember.MEMBER_CAPTION, BDESC)
ON ROWS
FROM [Cube]
</code></pre>
</div>
<p>It is important to note that the first sort criterion is specified in the outer ORDER function, in this case Year. The inner ORDER function specifies the second sort criterion (Product Category ordered by Sales).</p>
https://salvoz.com/posts/2013-12-03-sql-server-data-mining-time-series-algorithm-tips.htmlSQL Server Data Mining: Time Series Algorithm Tipsjsalvo2013-12-03T14:35:59Z<ul>
<li>The Time Series algorithm has several parameters that may be adjusted if needed. I have tried adjusting several of them and determined that the following appear to have the most impact (a DMX sketch after this list shows how these parameters are set):
<ul>
<li>MISSING_VALUE_SUBSTITUTION: If you have any blanks or gaps in your data and the MISSING_VALUE_SUBSTITUTION parameter is not set appropriately, your model processing will fail. There are several options you can select from when setting this parameter:
<ul>
<li>Previous: The prior value is used when missing data is encountered.</li>
<li>Mean: The missing data is replaced with the mean of the entire series.</li>
<li>A Number: You may also specify a hard-coded value to be used whenever missing data is encountered.</li>
</ul>
</li>
<li>PERIODICITY_HINT: If your data has periodic or seasonal trends, you’ll want to make sure this parameter is set to accurately reflect these trends. For example, if your data is monthly and has quarterly and yearly periodicity, you should set the PERIODICITY_HINT to {3,12}.</li>
<li>AUTO_DETECT_PERIODICITY: This parameter controls how aggressive the algorithm is at finding periodicities in the data. A value close to 1 causes the algorithm to detect very subtle periodicities. In contrast, setting this parameter to 0 will only detect the strongest periodicities (if any exist). If you set the PERIODICITY_HINT parameter, then the AUTO_DETECT_PERIODICITY is automatically set to a low value and the algorithm will abide by any settings you specified in the PERIODICITY_HINT parameter.</li>
<li>MINIMUM and MAXIMUM_SERIES_VALUE: These parameters allow you to specify a top and bottom range of valid values for the predictions.</li>
</ul>
</li>
<li>The time series algorithm is extremely sensitive to the amount and quality of historic data. Minimal or erratic historic data is an issue since the algorithm doesn’t have enough consistent history to learn from or discern useful patterns. Too much history may also be an issue. Extremely old data may no longer be indicative of future trends; therefore, it may make sense to exclude this data from the model. In my experience, roughly 20 historic data points seems to be the minimum amount of data to obtain reasonable predictions. If your data has periodicity, you’ll want to make sure you have enough history to cover several iterations of the periodicity interval (if your data has yearly periodicity you’ll want to have several years of history).</li>
<li>A common problem in time series predictions is limited historic data. As discussed earlier, you need a reasonable amount of consistent history to obtain accurate predictions. If you only have a few data points, but want to make predictions based off of the data then you may be able to use a model built with a different dataset, assuming you anticipate the new series to behave similarly. This technique involves using the REPLACE_MODEL_CASES flag. An example of the technique is detailed in the article <a href="http://www.sqlserverdatamining.com/ssdm/Default.aspx?tabid=102&Id=391">'Predicting Future Steps When You Do Not Have Enough History'</a> on the website <a href="http://www.sqlserverdatamining.com">www.sqlserverdatamining.com</a></li>
<li>Testing the accuracy of a Time Series model is also a challenge, waiting for several months into the future to see if your model works well is often not a viable option. The HISTORICAL_MODEL_GAP and HISTORICAL_MODEL_COUNT parameters can be used to test your model. When these parameters are set, the Microsoft Time Series algorithm builds additional models equal to the HISTORICAL_MODEL_COUNT value. These models are trained with a truncated series. The HISTORICAL_MODEL_GAP parameter determines how much of the data is truncated. You can then use the PredictTimeSeries function with negative values to ‘predict’ historic data. An example of this technique is covered in the article <a href="http://www.sqlserverdatamining.com/ssdm/Default.aspx?tabid=102&Id=53">'Determining the Accuracy of a Time Series Forecast'</a> on the website <a href="http://www.sqlserverdatamining.com">www.sqlserverdatamining.com</a></li>
<li>Another means to check the quality of predictions is via supplemental statistics (standard deviation, variance, and content node ID). The article named <a href="http://www.sqlserverdatamining.com/ssdm/Default.aspx?tabid=102&Id=53">'Determining the Accuracy of a Time Series Forecast'</a> on the website <a href="http://www.sqlserverdatamining.com">www.sqlserverdatamining.com</a> includes example DMX to obtain the supplemental statistics from a data mining model.</li>
</ul>
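<p>As promised above, here is a DMX sketch showing how these parameters are set when creating a time series model. The model and column names are hypothetical (monthly sales data, with the quarterly/yearly periodicity hint from the earlier example), so treat this as a starting point rather than a drop-in statement:</p>
<div class="code-block">
<pre><code class="language-sql">-- Hypothetical monthly sales series keyed on a date column
CREATE MINING MODEL [SalesForecast]
(
    [ReportMonth] DATE KEY TIME,
    [Sales] DOUBLE CONTINUOUS PREDICT
)
USING Microsoft_Time_Series
(
    MISSING_VALUE_SUBSTITUTION = 'Previous', -- fill gaps with the prior value
    PERIODICITY_HINT = '{3,12}'              -- quarterly and yearly patterns in monthly data
)

-- Once the model is trained, forecast the next 6 periods
SELECT FLATTENED PredictTimeSeries([Sales], 6) FROM [SalesForecast]
</code></pre>
</div>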
<p>The sqlserverdatamining site has a wealth of information related to many of the SQL Server data mining algorithms. The ‘Tips and Tricks’ section of the site is especially useful. An online forecasting tutorial covering the Time Series algorithm is also available at <a href="http://technet.microsoft.com/en-us/library/ms169846.aspx">http://technet.microsoft.com/en-us/library/ms169846.aspx</a>.</p>
https://salvoz.com/posts/2013-12-03-dsc-windows-feature-web-ftp-service.htmlDSC Windows Feature Web-Ftp-Serviceasalvo2013-12-03T07:07:08Z<p>This week I've started learning/using <a href="http://blogs.msdn.com/b/powershell/archive/2013/11/01/configuration-in-a-devops-world-windows-powershell-desired-state-configuration.aspx">Desired State Configuration</a>. I'm sure I will be blogging about it more in the coming weeks, but for now here is a quick overview of installing the FTP Role Feature. One of the built-in resources for DSC is WindowsFeature, which allows you to enable different features and roles on Windows. A DSC configuration file might look something like this:</p>
<div class="code-block">
<pre><code class="language-powershell">Configuration IISWebsite
{
    Node Server1, Server2
    {
        WindowsFeature Web-Server
        {
            Ensure = "Present"
            Name = "Web-Server"
        }
    }
}
</code></pre>
</div>
<p>But what options are there for the Name? Well, if you execute the Get-WindowsFeature cmdlet, you will get a nicely formatted list, which lists them all and whether or not they are installed. The following list shows all of the Web role features from Server 2012 R2. <a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/WebFeatures.png"><img class="alignnone size-medium wp-image-2274" alt="WebFeatures" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/WebFeatures.png"/></a></p>
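<p>If you only care about the web-related features, you can filter the list down instead of scanning the full output. A quick sketch (the wildcard works because the Web role features all share the Web- prefix):</p>
<div class="code-block">
<pre><code class="language-powershell"># List just the Web role features and whether each is installed
Get-WindowsFeature -Name Web-* | Select-Object Name, InstallState
</code></pre>
</div>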
<p>Most of the examples out there show using a value of Name = 'Web-Server', which corresponds to the Web Server role itself, but it's not needed. You only need to specify the names of the child features, and Windows will install the required dependencies.</p>
<div class="code-block">
<pre><code class="language-powershell">Configuration IISWebsite
{
    Node Server1, Server2
    {
        WindowsFeature Logging
        {
            Ensure = "Present"
            Name = "Web-Http-Logging"
        }
        WindowsFeature Ftp
        {
            Ensure = "Present"
            Name = "Web-Ftp-Service"
        }
    }
}
</code></pre>
</div>
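<p>For completeness, here is roughly how a configuration like this gets applied. Running the IISWebsite function compiles one MOF file per node into the output path (the path below is just a placeholder), and Start-DscConfiguration pushes the configuration to the target nodes:</p>
<div class="code-block">
<pre><code class="language-powershell"># Compile the configuration to MOF files (one per node)
IISWebsite -OutputPath C:\Dsc\IISWebsite

# Push the configuration and show progress
Start-DscConfiguration -Path C:\Dsc\IISWebsite -Wait -Verbose
</code></pre>
</div>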
<p>And here is the verbose output from Start-DscConfiguration:</p>
<p><a href="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/DscFtpInstall.png"><img class="alignnone size-medium wp-image-2277" alt="DscFtpInstall" src="https://salvozshue1gen01.blob.core.windows.net/public/blog-post-content/2013/12/DscFtpInstall.png" /></a></p>