unpac your bacpac and get your day bac

Recently I needed to do a high-priority “clone” of a database hosted in Azure. My normal process for these in the past has been to use the Export Data-tier Application / Import Data-tier Application approach: a .bacpac file is generated, stored locally on disk or in cloud storage, and then used to create a new database via the Import feature. I have used both the Azure management portal and SQL Server Management Studio. The Management Studio approach seems to work across Azure subscriptions, or at least works more easily for me, since the Azure Portal approach fails with authentication issues when I attempt to import to a different subscription than the one the bacpac was captured from. Overall this approach works fine and has served me well. However, it can be a bit slow, especially when using the Management Studio. The Studio runs locally on your machine, has to download the bacpac from the cloud, and then push all the resulting creates/inserts etc. up to the cloud DB.
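For reference, the same export can also be scripted with SqlPackage.exe, the command-line tool behind the Data-tier Application features (the server, database, and file path below are placeholders):

SqlPackage.exe /Action:Export /SourceServerName:myserver.database.windows.net /SourceDatabaseName:MyDb /TargetFile:"C:\temp\MyDb.bacpac"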

Anyway, bac to the other day. The export took longer than usual, then the import just seemed to hang. After hours of watching it spin I started to get concerned and began looking a bit more closely. The tables it was hung on were log tables that had not been cleaned out in a while. In fact, the table it was “stuck” on had north of 17 million rows in it, and after a few hours of processing it had only completed 2-3 million. I had to use the snapshot I had taken; the data had moved on since, and a go-back was not possible.

So, long story short(er): I started researching (googling) while the process continued. I found that a bacpac file is just a renamed .zip file and could be extracted and perhaps even edited. This gave me a way out. I extracted the bacpac file to see what I could find.

[image: the extracted contents of a bacpac file]

Here, nicely laid out, is the package that is a bacpac file. Opening the Data\ folder will show you the data that is targeted for each table, nicely arranged by table name.

[image: the Data folder contents, one folder per table]

This made-up set of tables shows the layout. Simply choose the tables that have expendable data, such as log entries, error logs, traces etc. Open those folders and delete the .BCP files they contain. In my case I had around 9 GB of uncompressed .BCP files that I could delete.

Now just re-compress those files, rename the resulting .zip file to .bacpac, and you are good to go. One note: when you re-zip, make sure to do so from the root of the extracted bacpac folder, not the parent folder you extracted to. If you compress the whole folder, the nesting of the files will be off and you will get an error on import like this:

[image: bacpac import error dialog]

This is basically telling you that you re-zipped at the wrong level.
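If you want to script the round trip, a minimal C# sketch might look like this (the paths are made up; note the includeBaseDirectory: false argument, which zips the contents of the folder rather than the folder itself, and so avoids the nesting error above):

using System.IO.Compression; //reference System.IO.Compression and System.IO.Compression.FileSystem

class BacpacRepack
{
    static void Main()
    {
        //a .bacpac is just a renamed .zip, so extract it as one (hypothetical paths)
        ZipFile.ExtractToDirectory(@"C:\temp\MyDb.bacpac", @"C:\temp\MyDb_extracted");

        //...delete the expendable Data\<TableName>\*.BCP files here...

        //re-zip from the root of the extracted folder, not its parent
        ZipFile.CreateFromDirectory(@"C:\temp\MyDb_extracted", @"C:\temp\MyDb_trimmed.bacpac",
            CompressionLevel.Optimal, includeBaseDirectory: false);
    }
}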

you got your merge tool in my diff

In the past I had recommended using WinMerge as a replacement diff and merge tool in Visual Studio, simply linking to a post that showed the setup.

Since then my preferences have shifted toward kdiff3, but mostly as a merge tool. I still favor WinMerge for simple diffs, as I can set it up to ignore all whitespace changes and get a cleaner view of what really changed between two versions of a file. I may be missing something in kdiff3’s configuration, but I couldn’t find a way to make that happen. For merges, however, kdiff3, with its 3-way view and common-sense commands, is incredible for complex merges. So now I use the “right” tool for each job. Another plus is that I can always tell visually whether I am doing a simple diff or attempting a merge.

Every time I have to set this up on a new machine I have to search for the command-line arguments to pass. So basically, I am posting this so I can find it next time.

Now if I can only get Git to use the same combination. So far it seems friendly to kdiff3 but not WinMerge…
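For kdiff3, at least, the wiring is straightforward, since Git knows kdiff3 as a built-in mergetool; something like this should do it (assuming the same install path as below):

git config --global merge.tool kdiff3
git config --global mergetool.kdiff3.path "C:/Program Files (x86)/KDiff3/kdiff3.exe"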

 

WinMerge as diff tool

In Visual Studio, navigate to Tools > Options > Source Control > TFS > Configure User Tools. Configure the Compare operation as shown here:


C:\Program Files (x86)\WinMerge\WinMergeU.exe
/e /u /wl /wr /dl %6 /dr %7 %1 %2

[image: WinMerge compare tool configuration]

kdiff3 as merge tool

Configure the Merge operation as shown here:


C:\Program Files (x86)\KDiff3\kdiff3.exe
%3 --fname %8 %2 --fname %7 %1 --fname %6 -o %4

 

[image: kdiff3 merge tool configuration]

wpf bootstrapping datatemplates – the chicken and the egg

I saw this post about MVVM binding issues go by on StackOverflow the other day and wanted to post a reply but never got around to it.

The basic issue the poster was dealing with was how to bootstrap a WPF application to load the first view, using a DataTemplate in a ResourceDictionary to define the View that would be displayed. Let’s build this from the “bottom” up, as it were.

Define the View and ViewModel
What do you want to see and how should it look?

Let’s start with the ViewModel. The goal in the question was to show that two textboxes in the UI were bound to the same value and would show updates to the shared value in real time. So we have defined one public property, TestData, that will be displayed/edited in our View.

using System;
using System.ComponentModel;

namespace WpfTemplateBootstrap
{
    /// <summary>
    /// ViewModel implementing INotifyPropertyChanged so that the View will be
    /// notified of changes to property values
    /// </summary>
    public class MainWindowViewModel : INotifyPropertyChanged
    {
        public MainWindowViewModel()
        {
            //set an initial value -- makes it easier to see if binding is correct on load
            _testData = "hello";
        }

        private String _testData;
        public String TestData
        {
            get { return _testData; }
            set
            {
                _testData = value;
                //must raise PropertyChanged with the correct property name (case sensitive) for the UI to stay in sync
                OnPropertyChanged("TestData");
            }
        }

        public void OnPropertyChanged(string name)
        {
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }
}

Things to note about supporting binding in the ViewModel:

  • It must implement INotifyPropertyChanged.
  • Values that will be bound to must be public properties.
  • Each property that is going to be bound in the UI must raise PropertyChanged to publish the fact that its value needs to be refreshed in the UI wherever it is bound.
  • The property name that is passed to the PropertyChangedEventHandler must match the property name exactly; it is case sensitive.
  • Adding default values at first will help show whether the binding is working.
  • If binding does not seem to be working, always check the Output window, as it will show any binding errors that will help resolve the issue.

Now for the View. I think this is an exact copy of the sample code from the SO question, with one change: it is not a Window but a UserControl, since we want to load this as a View hosted in a containing Window. The TextBox controls are both bound to the same property: TestData. Also, the UpdateSourceTrigger is set to “PropertyChanged”, which updates the ViewModel on every keystroke, not just when you tab out (the default).

<UserControl x:Class="WpfTemplateBootstrap.MainWindowView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             mc:Ignorable="d"
             d:DesignHeight="300" d:DesignWidth="300">
    <Grid>
        <TextBox Height="23" HorizontalAlignment="Left" Margin="61,14,0,0"
                 Name="textBox1" VerticalAlignment="Top" Width="120"
                 Text="{Binding Path=TestData, Mode=TwoWay,
                        UpdateSourceTrigger=PropertyChanged}" />
        <Label Content="Test:" Height="28" HorizontalAlignment="Left" Margin="12,12,0,0"
               Name="label1" VerticalAlignment="Top" Width="43" />
        <Label Content="Result:" Height="28" HorizontalAlignment="Left" Margin="10,46,0,0"
               Name="label2" VerticalAlignment="Top" />
        <TextBox Height="23" HorizontalAlignment="Left" Margin="61,48,0,0"
                 Name="textBox2" VerticalAlignment="Top" Width="120"
                 Text="{Binding Path=TestData, Mode=TwoWay,
                        UpdateSourceTrigger=PropertyChanged}" />
    </Grid>
</UserControl>

Simple View/ViewModel loading

Now that we have defined what we want to see in the UI, we need to somehow get this loaded in our UI when the application starts up. The simplest path to this goal would be to just add an instance of our MainWindowView to our MainWindow.xaml.

<Window x:Class="WpfTemplateBootstrap.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:WpfTemplateBootstrap"
        Title="MainWindow" Height="350" Width="525">
    <local:MainWindowView />
</Window>

In order for this to work, somebody has to load up an instance of MainWindowViewModel to be the data context for the View. Without it there is no object for the data binding engine to wire up to. In this simple model, the View code-behind is “ViewModel aware” (gasp!) and loads up an instance of the correct ViewModel when it is initialized, like this:

public partial class MainWindowView : UserControl
{
    private MainWindowViewModel _vm;

    public MainWindowView()
    {
        InitializeComponent();

        _vm = new MainWindowViewModel();
        this.DataContext = _vm;
    }
}

One of the things about this question that caught my interest was that it seems the poster had tried this approach first, and had it working, but wanted a “cleaner” separation of logic from UI, and so went to the next step of using DataTemplates to define the View. That way the View could be completely agnostic as to the loading and type of the ViewModel. A noble goal to be sure, so let’s wire up that approach…

Template the ViewModel so it has a defined UI presence

With this approach we are going to turn things around. Instead of explicitly creating a View that we want to display in the UI and then trying to figure out how to get that View wired up to a ViewModel instance, we are going to add an instance of the ViewModel to the UI and let the WPF framework figure out how to represent that object. So, let’s change our MainWindow. Using a ContentControl, we can add our ViewModel to the UI even though it is not a UI type itself.

<Window x:Class="WpfTemplateBootstrap.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:WpfTemplateBootstrap"
        Title="MainWindow" Height="350" Width="525">
    <ContentControl>
        <ContentControl.Content>
            <local:MainWindowViewModel />
        </ContentControl.Content>
    </ContentControl>
</Window>

If we run our application now, WPF will do its best and call ToString() on our object and show the results in the UI. Pretty cool, but not very useful.

[image: the ViewModel displayed as its ToString() output]

So, next we define a DataTemplate. Think of a DataTemplate as a way of saying to WPF: “when you run into an instance of this Type that you need to display, this is what it looks like…” We could do this right in MainWindow.xaml, but one goal of the question was to use a ResourceDictionary. This comes in handy when the Type will be displayed in different Windows in our application. So to set this up, add a ResourceDictionary to the project. Also reference that dictionary in the App.xaml file so that it is scoped, and available, to the entire application.

<!-- MainResourceDictionary.xaml file -->
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                    xmlns:local="clr-namespace:WpfTemplateBootstrap">

    <DataTemplate DataType="{x:Type local:MainWindowViewModel}">
        <local:MainWindowView />
    </DataTemplate>

</ResourceDictionary>


<!-- App.xaml file -->
<Application x:Class="WpfTemplateBootstrap.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             StartupUri="MainWindow.xaml">
    <!-- our application-scope ResourceDictionary refs go here -->
    <Application.Resources>
        <ResourceDictionary Source="MainResourceDictionary.xaml" />
    </Application.Resources>
</Application>

We can remove the code-behind we added to MainWindowView.xaml.cs now, since the View is no longer responsible for loading the ViewModel. We can run the application and see that our bootstrap and binding work.

[image: the bootstrapped View up and running]

jQuery mobile: navbar data-iconpos overrides button attributes

Ran into this today as I was moving from a standard footer with optional buttons to a fixed footer laid out as a navbar as shown here.

[image: jQuery Mobile fixed footer navbar]

Not a big deal, but I had started defining my data-iconpos values (left | right | top | bottom | notext) at the button level, which is usually where you need to do it. However, when the buttons are wrapped inside a navbar widget, the navbar has its own data-iconpos attribute. That makes sense: you can set it at the top level and not have to manage each button individually. Also making sense is the fact that the default value for a navbar widget is “top” vs. the button widget’s default of “left”. In the navbar layout, icons on top really do look better.

What I didn’t see coming was that setting the data-iconpos at the navbar level, or even just leaving the default by not explicitly setting it, enforces the layout of all the buttons, and the data-iconpos setting on the individual buttons has no effect. Kind of a reverse cascade, I guess. :-)

<div data-role="footer" data-position="fixed">
    <div data-role="navbar">
        <ul>
            <!-- this data-iconpos value will have no effect -->
            <li><a href="@Url.Action("Index","Home")" data-role="button" data-iconpos="left" data-icon="home">Home</a></li>
            <li><a href="@Url.Action("Index","Home")" data-role="button" data-icon="search">Search</a></li>
            <li><a href="@Url.Action("Index","Home")" data-role="button" data-icon="gear">Settings</a></li>
        </ul>
    </div>
</div>
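The takeaway: if you do want a different icon position, the place to say so is the navbar element itself. A minimal sketch, with the attribute moved up a level:

<!-- set the icon position once, at the navbar level -->
<div data-role="navbar" data-iconpos="left">
    ...
</div>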

FUD driven development

disclaimer: You ever notice how some people use the word “passionate” when what they really mean is “annoying”? Hopefully I am not that guy. :-)

The fact is, here it comes … I get passionate (insert your synonym of choice) about code, both good and bad. I got particularly emotional, shall we say, about a piece of code I needed to read and duplicate in the current prototype I am working on.

My task for the day was re-implementing existing logic from an MVVM application in an ASP.NET MVC prototype. My workflow for the day was shaping up nicely: target an existing piece of functionality, say a View in the WPF application; do a quick re-design with a smaller form factor and a web platform in mind; build out a quick set of View/Controller/Model and Repository classes; wire these into the already existing navigation by adding a link or button to access the new page; then all I needed to do before I could show real data on the screen was to open up the WPF code, find the ViewModel that was loading the same data I was looking for, and make sure I was comparing apples to apples when it came to security and business logic decisions etc. Life was good.

I opened the ViewModel I needed to pull logic from, and an interesting thing happened. At first I thought I was just annoyed and frustrated because the code was not written as well as I would have liked. On later reflection, however, I realized that the code didn’t so much annoy me as unnerve me. It rattled me and totally killed my confidence in what I was doing. So I thought I would analyze just what about the code had this effect and see if I could define it better for myself.

I am not posting real code here, so what follows are pseudo-code examples that demonstrate the flavor of the code in question.

The basic structure of the method is this:

public GetOrderData LoadData(ReportType contentOption)
{
    GetOrdersArgs args = new GetOrdersArgs();
    args.SalesPersonId = LoggedInUser.Id;
    args.ReportType = contentOption;

    //other code here

    GetOrderData data = SalesRepository.GetOrders(args);
    return data;
}

Create the args needed to send to the service, call the repository, return the data. Nice and clean. The real code had more arguments but this should suffice to make the point.

As I was reading the code, the first thing that jumped out at me was that the SalesPersonId argument was being set to the logged-in user’s id. In most cases this was fine, but there was also the ability for a manager to view sales data for the salesmen they managed, so the logged-in user’s id was not always the correct id to pass. At first glance I thought I had found a bug in the code. Then I read on.

//bug fix. based on the contentOption enum that is requested, the SalesPersonId may need to be changed
switch (contentOption)
{
    case 1:
        args.SalesPersonId = LoggedInUser.Id;
        break;
    case 2:
        args.SalesPersonId = GetSalesPersonId();
        break;
    //etc
}
//end bug fix


So, what happened here was now fairly clear: the bug was found, perhaps in testing … this is good. The cause, submitting the incorrect id in cases of manager access, was properly understood … good as well. The correct fix was inserted, and we’re batting a thousand here! So what’s the problem? It isn’t so much what was done as what was not done. The old code was left in place. This makes the first batch of code misleading and gives any reviewer or future debugger a false impression of what is being done in the method, even leading to my false assumption that I had just found a bug. At this point I would understand if your reaction was “so what?”, no big deal, etc. However, keep in mind that this sample has been dumbed down to fit in two tidy code snippets; imagine the real code in question was well over a hundred lines long and the affected logic was multiple lines of code, not one simple assignment.
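For contrast, finishing the job might have looked something like this (still pseudo-code with the same made-up names; the ReportType value is hypothetical):

public GetOrderData LoadData(ReportType contentOption)
{
    GetOrdersArgs args = new GetOrdersArgs();

    //decide the correct id once, up front, instead of patching it later in the method
    args.SalesPersonId = (contentOption == ReportType.ManagerView)
        ? GetSalesPersonId()
        : LoggedInUser.Id;
    args.ReportType = contentOption;

    //other code here

    return SalesRepository.GetOrders(args);
}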

With all that being said, the question is: why? With the bug understood, and the fix understood and implemented, why stop there? The answer is what drove me to write this post: Fear, Uncertainty & Doubt (FUD). The method could be considered complex, that I will grant, but the only reason I can see for holding back from finishing the job is fear: fear of breaking something else unintentionally. So basically, the thinking could go like this:

  • “I see here at line 115 the value is incorrect because the code above did not account for the manager access scenario.”
  • “I could review lines 1-114 to find the original problem, fix it there, retest the method …”
  • “… or I could just insert logic here to ‘fix’ the problem as I see it”

This is just one small step toward a big ball of mud. The fear to make the code better as we go ensures that we will make it worse. Another lesson I learned is that FUD begets FUD. I was “afraid”, in a way, of the code in question because I thought I knew what it did and now I was unsure of its intention; I had to do more work to make sure I understood the code and was able to replicate the logic correctly. That, I believe, is what caused my original uneasy feeling when reviewing the method. Unfortunately, if the developer working on the next fix in this code took the same approach, we would then have two misleading code segments made unnecessary by yet another fix later in the code. Wash, rinse, repeat.

So, when it comes to fixing code and making our code base better and more maintainable as we go, the old mantra stands: “if we are not part of the solution, we are part of the problem”. If our code does not add clarity, or we write, or even just allow to survive, code that is unnecessary or misleading, then even if we “fixed the bug”, at the end of the day we’ve left the code worse off than we found it.

new season new logo

Felt like Shultz was getting a bit tired, so the team has revamped as the Vogon Poets.

May their game be better than their prose.

[image: Vogon Poets logo]

learning F# five minutes at a time

When I started working thru the ProjectEuler problems, I coded the solutions in C# but later switched to F#. I had wanted a reason to learn a little functional programming, and this seemed like a fun way to do it. Little did I realize what a stretch of my object-oriented, imperative mind it would be.

Since the switch to F# I would have to admit that the time-to-solution is about half learning the math and half trying to grok how things are done in F#, or in functional languages in general. The leap from object thinking and imperative programming to functional is non-trivial, at least for me. It’s one thing to take the C# code in your head and convert, or port, it to F#, and quite another to learn the functional way of accomplishing what you have in your head. It is a frank admission of how new I am to functional concepts that the first time I tried to figure out how to create a List(Of T) in F# I couldn’t figure out the syntax, because there was this other pesky List class “getting in the way” of my imperative genius! I figured I had to do some reading, only to find that lists are one of the core data structures in functional languages and entirely different from List(Of T) (and way cool, by the way). In fact, look up LISP, “the mother of all functional languages,” on Wikipedia and you find this definition (emphasis mine):

The name LISP derives from "LISt Processing". Linked lists are one of Lisp languages' major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific languages embedded in Lisp.

Doh!

So that may have been my lesson one in the “this is not your momma’s .NET” class. If so, lesson two is shaping up to be functions themselves. Seeing that all programming languages have functions, I might be forgiven for thinking that I had this one handled. However: it is one thing to say “everything is a function” and quite another to realize that every thing is a function! Some cases in point:

let x = 144

A value declaration is actually a function x of type unit -> int (no value in, int out). Note: the term “variable” just doesn’t fit in a language where everything is, by default, immutable. You can force mutability, but as a quote I just heard on DotNetRocks put it so well: “every time you type the mutable keyword in F# somewhere a puppy dies.”

let update () = printfn "processing complete"

This also evaluates to a function, of unit -> unit.

Something we would think of as a more traditional function might be declared as:

let f x = x + 1

This function f can be expressed in F#-ese as int -> int. Basically it takes in an integer and returns an integer. Ho-hum. Now let’s kick it up a notch:



let f x y = x * y

This evaluates, no shocker, to int -> int -> int. Notice that it is not (int, int) -> int. The -> symbol is the function symbol and basically amounts to “when the function is applied to the value on the left, the value on the right comes out.” So, why are there two function symbols in the definition? Because there are two functions implied by f x y = x * y. The first function takes the value of x and returns a new function that will take in the value of y and return the final result (x * y). So if we were to write this in C# it might look something like this:



f_part1(x).Invoke(y)

So, we can actually call a function in stages, storing the intermediate function that still needs to be called to finish the job. To show this I have a simple function in F# that computes the A value of a Pythagorean triple. The function takes in two parameters, m and n. I can call the function on one line, or pass only m on the first call and later call the function that was returned, passing in n to get the final result.



//helper to square a number (needed by pyA)
let sq n = n * n

let pyA m n =
    (sq m) - (sq n)

let partial = pyA 144
//partial is the function that was returned by partial evaluation of just the first of the two implied functions
let r = partial 12
printfn "result %d" r
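Incidentally, the same staged call can be sketched in C# with nested lambdas (purely illustrative, not what the F# compiler actually emits):

// a hand-curried C# equivalent of pyA, for illustration only
Func<int, Func<int, int>> pyA = m => n => (m * m) - (n * n);
var partial = pyA(144);   // m is captured; partial is int -> int
int r = partial(12);      // 20592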

 

It is interesting to take the compiled F# assembly and decompile it in Reflector to see what is going on under the hood. If you look at the C# version of the IL you see this:



[Serializable]
internal class partial@139 : FSharpFunc<int, int>
{
    // Fields
    [DebuggerBrowsable(DebuggerBrowsableState.Never), CompilerGenerated, DebuggerNonUserCode]
    public int x;

    // Methods
    internal partial@139(int x)
    {
        this.x = x;
    }

    public override int Invoke(int y)
    {
        return Program.pyA(this.x, y);
    }
}

//later the call to the function is shown as this:
int x = 0x90;
int r = new partial@139(x).Invoke(12);

 

 

So we can see a class (in the C# version) is created to store the partial result and expose the Invoke() function to complete the calculation. F# doesn’t have to jump thru these hoops, but it does help illustrate what is going on in F# function evaluation. So, some function rules we have gleaned:

  • every function takes in a value (no parameters in = a value of Unit)
  • every function returns a value (no return value = a value of Unit)
  • every function takes in only one value. What we see as multi-parameter functions are actually multiple functions chained together.

BTW: this info was gleaned from F# Survival Guide by John Puopolo with Sandy Squires. I found it at that link as a free ebook, and it is a well written introduction to functional programming for the functionally challenged .NET programmer. ;-)

getting your math on

Recently I started working thru the ever-growing list of cool math- and algorithm-based puzzles on ProjectEuler.net. I’m moving pretty slowly so far, but having fun learning some new things, which is really the point of the exercise anyway. My basic approach so far is to digest a new puzzle and see if I can solve it based on what I already know. If I can’t, I let it steep for a while, trying to figure out just what it is that I don’t know I don’t know. Once I have a clue what I might need to know, I start Google-Binging, not for solutions but for information and explanations that will help me understand the problem and the solution. After that I need to, in most cases, code up an algorithm to solve the problem.

In searching for insights, I repeatedly come to the same sites and publications that I wanted to recommend and comment on here.

Most insightful: betterexplained.com

This is not a problem -> solution format; rather, the author attempts to develop my “math intuition”. I always find a deeper understanding than I actually need to solve the current problem. I love when some light bulbs go off in previously dim corners of my mind. I found this site first when looking for a better understanding of prime numbers and how they factor into the fabric of the universe. I have not yet ordered his book, but I have read much of the content and would highly recommend it.

Most practical for the problem at hand: Dr. Math (mathforum.org)

This is the problem -> solution format, and it works great for breaking down a problem and building up to the solution in a way that you come away with an understanding of how you got there, without just being handed a formula that answers the question. In fact, in most cases the final solution is left as an exercise for the reader once the foundation is laid. I first found the good Dr. when Google-Binging for ideas on how to work through finding the number of divisors of a given number.
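For the curious, the heart of that particular lesson is a standard factorization trick: if n = p1^a1 * p2^a2 * …, the divisor count is (a1+1)(a2+1)…. A quick C# sketch of the idea, no spoilers involved:

// count divisors of n via prime factorization:
// if n = p1^a1 * p2^a2 * ... then d(n) = (a1+1)*(a2+1)*...
static int CountDivisors(long n)
{
    int count = 1;
    for (long p = 2; p * p <= n; p++)
    {
        int exp = 0;
        while (n % p == 0) { n /= p; exp++; }
        count *= exp + 1;
    }
    if (n > 1) count *= 2;  // one prime factor > sqrt(n) remains
    return count;
}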

Best book (so far): Mathematics in 10 Lessons

This book has not been so practical for solving the problems of the day, but it has been a fun read so far in that it takes math and builds it up from first principles. The stated goal of the book is to explain math to non-technical people in such a way that even a poet can at least glimpse the elegance that mathematicians see. A lofty goal, and I wish him well; either way, the book is a fun read for the technical guy or gal who just wants to fill in some gaps.

Most to be avoided: StackOverflow

As much as it grieves me to say it: while StackOverflow is a routine go-to place for programming topics, when it comes to ProjectEuler most of what I have found there has been people looking for someone to hand them the solution. What is the point of that? That’s like someone telling me how the movie is going to end. So, watch out for spoilers.

Know of other items that would be good additions? Please share.

The Blog is Dead … Long live “the Blog”

I have enjoyed blogging at weblogs.asp.net on the “boy named goo” blog, and I thank them for their hosting. However, my use of that blog has really dropped off, and traffic on the site as a whole seems to be doing the same.

All that being said, the real reason ;-) to start a new blog on my own site is simply because I wanted to. I hope to be a bit more active here, keep my own bar low for “blog worthiness”, and enjoy writing more for what it is to me: a hobby and a way to process and save my pseudo-random stream of thoughts regarding development, technology and any other thing that comes to mind along the way.

About me

.NET developer in upstate NY, USA
Current focus technologies: WPF, WCF
Intrigued by: Functional programming ala F#, Code Analysis, Math
Hobbies: this blog, go figure
